Abstract
We propose a camera model for linescan cameras with telecentric lenses. The camera model assumes a linear relative motion with constant velocity between the camera and the object. It can model lens distortions, while supporting arbitrary positions of the line sensor with respect to the optical axis. We comprehensively examine the degeneracies of the camera model and propose methods to handle them. Furthermore, we examine the relation of the proposed camera model to affine cameras. In addition, we propose an algorithm to calibrate telecentric linescan cameras using a planar calibration object. We perform an extensive evaluation that establishes the validity and accuracy of the proposed camera model. We also show that even for lenses with very small lens distortions, the distortions are statistically highly significant. Therefore, they cannot be omitted in real-world applications.
Introduction
Linescan cameras play an important role in machine vision applications because they offer a better resolution per price in comparison to areascan cameras. Today, linescan cameras with lines of up to 16,384 pixels are available (Steger et al. 2018, Chapter 2.3.4). The height of the resulting image is essentially unlimited because it corresponds to the number of 1D images acquired over time, as described in more detail below. Hence, several hundred megapixels per image can be achieved easily.
In contrast to areascan cameras, the sensor of a linescan camera consists of a single line of photosensitive elements. Consequently, the image that is obtained from a linescan camera would be one pixel high. To obtain a 2D image that can be processed in computer vision applications, multiple 1D images are stacked over time while moving the sensor with respect to the object that is to be imaged. In machine vision applications, the relative motion is realized either by mounting the camera above the moving object or by moving the camera across the stationary object (Steger et al. 2018, Chapter 2.3.1.1). The motion can be effected, for example, by a conveyor belt, a linear motion slide, or other linear actuators. For practical applications, it is therefore not sufficient to calibrate the single sensor line only. Instead, the process of creating the 2D image must also be included in the calibration.
Obviously, the resulting image strongly depends on the relative motion of the camera with respect to the object. In almost all machine vision applications, a linear motion is applied. This requires the camera to move with constant velocity along a straight line relative to the object while the orientation of the camera is constant with respect to the object. Furthermore, the motion must be equal for all images (Gupta and Hartley 1997). In other application domains, e.g., in remote sensing, more general motion models are applied. For example, the motion of an airborne or spaceborne camera can be modeled by discrete positions and orientations (Haala et al. 1998) or approximated by polynomial functions (Lee et al. 2000; Poli 2007). Since our focus is on machine vision applications, we will assume a linear motion in this paper. In practice, a linear motion can be realized by using appropriate encoders that ensure a constant speed (Steger et al. 2018, Chapter 2.3.1.1; Beyerer et al. 2016, Chapter 6.8).
Because typical readout rates of linescan cameras are in the range of 10–200 kHz (Steger et al. 2018, Chapter 2.3.1.1), in some applications the exposure time of each line needs to be very short. Therefore, linescan applications often require a very bright illumination. Fortunately, the illumination only needs to cover the narrow footprint of the sensor line, and hence often line-shaped LED or halogen light sources are used. Another consequence of the short exposure time is that often an open diaphragm, i.e., a large aperture, is used to maximize the amount of light that passes through the lens. This must be taken into account when choosing an appropriate hardware setup because a large aperture severely limits the depth of field.
For linescan cameras with linear motion and a conventional entocentric (i.e., perspective) lens, an appropriate camera model and calibration procedure have been described in Steger et al. (2018, Chapter 3.9.3). Because of the perspective projection of an entocentric lens, objects that are closer to the lens produce a larger image. Therefore, this kind of linescan camera performs a perspective projection in the direction of the line sensor and a parallel projection perpendicular to the line sensor.
In contrast to entocentric lenses, telecentric lenses perform a parallel projection of the world into the image (Steger et al. 2018, Chapter 2.2.4). In many machine vision applications, and especially in measurement applications, telecentric lenses are preferred over entocentric lenses because they eliminate perspective distortions, which is especially important for gauging applications when non-flat objects must be inspected. Also, self-occlusions of objects that would occur because of perspective distortions are avoided (Luster and Batchelor 2012, Chapter 6.1).
Steger (2017, Section 4) and Steger et al. (2018, Chapter 2.2.4) show that the placement of the aperture stop is crucial for the projection properties of a lens. For entocentric lenses, the aperture stop is placed between the two focal points of the lens system. For telecentric lenses that perform a parallel projection in object space, the aperture stop is placed at the imageside focal point (Steger et al. 2018, Chapter 2.2.4; Lenhardt 2017, Chapter 4.2.14; Beyerer et al. 2016, Chapter 3.4.5). This effectively filters out all light rays that are not parallel to the optical axis in object space. Owing to the parallel projection in object space, the lens must be chosen at least as large as the object to be imaged.
Because of the advantages of linescan cameras and telecentric lenses, many machine vision applications would benefit from a camera model for linescan cameras with telecentric lenses. Obviously, a camera model and an appropriate calibration procedure are essential for accurate 2D measurements in world units. The calibration result facilitates the removal of image distortions and the rectification from the image to a world plane.
In addition to accurate measurement tasks, the calibration of linescan cameras is important for many other applications. For example, if the motion direction is not perfectly perpendicular to the sensor line, i.e., has a nonzero motion component in the direction of the sensor line, skewed pixels will be obtained (Steger et al. 2018, Chapter 3.9.3.4). Furthermore, if the speed of the motion is not perfectly adjusted, rectangular instead of square pixels will be obtained. Both effects cause many image processing operations that (often implicitly) assume square pixels to return false or unusable results. Examples are segmentation and feature extraction (e.g., the computation of moments or shape features from segmented regions), 2D template matching approaches, and stereo matching. Unfortunately, a perfect alignment of the camera is very cumbersome to realize in practice. For example, a sensor line with 16,384 pixels would have to be mounted with an accuracy of \(1/140^\circ \) in order to keep the total skew of the image below one pixel. Camera calibration allows us to rectify the images in order to eliminate lens distortions or skew, for example, and to ensure square pixels, hence making an exact alignment of the sensor line unnecessary.
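The mounting-accuracy figure can be verified with a short computation, assuming the skew is measured at the ends of a sensor line that is rotated about its center:

```python
import math

# A 16,384-pixel sensor line rotated by 1/140 degree about its center
# displaces the outermost pixels (8,192 pixels from the center) by
# roughly one pixel: 8192 * tan(1/140 deg) ~ 1.
n = 16384
angle = math.radians(1.0 / 140.0)
skew_px = (n / 2) * math.tan(angle)
print(f"skew at line ends: {skew_px:.3f} px")
```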
In this paper, we introduce a versatile camera model for linescan cameras with telecentric lenses. We first discuss work relating to camera models for entocentric linescan cameras in Sect. 2. We then discuss the camera models for areascan and entocentric linescan cameras on which our model is based in Sect. 3. In Sect. 4, we describe our camera model for telecentric linescan cameras, the calibration of its parameters, its relation to affine cameras, as well as the camera model’s degeneracies and how to handle them. Various experiments that establish the validity and accuracy of the model are described in Sect. 5. Finally, Sect. 6 concludes the paper.
Our main contributions are the following:

We propose a comprehensive and versatile camera model for linescan cameras with telecentric lenses. The camera model can represent a very large class of lens distortions. It does not assume that the sensor line is aligned with the optical axis of the lens. To the best of our knowledge, no camera model for linescan cameras with telecentric lenses has ever been proposed in the scientific literature.

The parameterization of the camera model is very intuitive for machine vision users. All parameters have a physical meaning that is easy to understand.

We prove that for the division model of lens distortions, the projection of a 3D point to the image can be computed analytically.

We establish that images of telecentric linescan cameras with lens distortions and a potential skew can be rectified to have no lens distortions and no skew without knowing the 3D geometry of the scene in the image. This is in contrast to linescan cameras with entocentric lenses, where this is impossible in general.

We propose a camera calibration algorithm that determines the camera parameters using images of a planar calibration object.

We examine how telecentric linescan cameras without lens distortions are related to affine cameras and prove that every affine camera can be regarded as a telecentric linescan camera with appropriately chosen interior orientation parameters. We also show that every telecentric linescan camera without lens distortions has an equivalent areascan camera with a bilateral telecentric tilt lens.

We comprehensively examine the degeneracies of the camera model and propose methods to handle them.

We perform an extensive evaluation that establishes the validity and versatility of our camera model.

We show that even for lenses with very small lens distortions, the distortions are statistically highly significant and therefore cannot be omitted in real-world applications.
Related Work
We have been unable to find any research relating to camera models for linescan cameras with telecentric lenses. The research closest to the camera model we will propose is research on camera models for linescan cameras with entocentric lenses. Therefore, we will discuss these approaches in this section. In our experience, it is very important for machine vision users to have a camera model with explicit parameters that are easy to understand and have a physical meaning. Hence, in the taxonomy of Sturm et al. (2010, Section 3), we require a global and not a local or discrete camera model. Therefore, although in principle any camera could be modeled by a generalized camera model, such as those described by Sturm et al. (2010, Sections 3.2 and 3.3) or Ramalingam and Sturm (2017), we do not consider these approaches further.
Camera models for linescan cameras with entocentric lenses can be grouped into two categories: static and dynamic camera models. In static camera models, the motion of the camera is not taken into account in the camera model and therefore is not calibrated. In contrast, dynamic camera models take the camera motion into account and calibrate it. In dynamic models, a linear camera motion is typically assumed. As discussed in Sect. 1, the ability to model the linear camera motion is essential in machine vision applications. Therefore, static camera models are less relevant there.
Another distinction is whether a camera model is able to model lens distortions. Machine vision applications often have very high accuracy requirements that can only be achieved if the camera model is able to model lens distortions accurately. Furthermore, the large size of the sensors also makes the ability to model lens distortions essential. For example, a maximum lens distortion of 0.1% (a figure often specified in data sheets of telecentric lenses) will cause a distortion of more than 8 pixels at the left and right edges of the image for a linescan sensor with 16,384 pixels.
Even if a camera model supports lens distortions, often the model assumes that the sensor line is mounted exactly behind the principal point, i.e., that the optical axis intersects the sensor line. In a real camera, this assumption may not be fulfilled. In fact, in photogrammetry and remote sensing, multiple sensor lines are often mounted behind the lens at large distances from the principal point (Chen et al. 2003). Therefore, to provide general applicability, the camera model should be able to represent cameras in which the line sensor is not mounted directly behind the principal point.
A final distinction is whether a planar (2D) or a 3D calibration object is used to calibrate the camera. In machine vision applications, there is often limited space in the machine where the camera must be calibrated. Here, 3D calibration objects may be too cumbersome to handle or may not even fit into the available space. In contrast, planar calibration targets are typically much easier for users to handle. Furthermore, backlight illumination is frequently used. For these applications, 2D calibration targets are much easier to manufacture. Finally, 2D calibration targets can be produced more cheaply and accurately than 3D calibration objects.
As a result of the above discussion, we require that the camera model is dynamic, supports lens distortions, allows the line sensor to be mounted anywhere with respect to the principal point or optical axis, and uses a planar calibration object. In the following, we will not discuss every camera model in detail. Instead, we will only mention the requirements that the respective model does not fulfill.
Static entocentric linescan camera models are described by Horaud et al. (1993), Luna et al. (2010), Lilienblum et al. (2013), Yao et al. (2014), Sun et al. (2016a, 2016b, 2017), Niu et al. (2018) and Song et al. (2018). The camera models by Horaud et al. (1993) and Luna et al. (2010) do not model lens distortions. The camera models by Lilienblum et al. (2013), Yao et al. (2014), Niu et al. (2018), Sun et al. (2016a, 2016b, 2017) and Song et al. (2018) model lens distortions, but assume that the sensor is mounted exactly behind the principal point. 3D calibration objects are used by Luna et al. (2010), Lilienblum et al. (2013), Niu et al. (2018) and Song et al. (2018). Finally, Sun et al. (2016b) use an additional areascan camera to calibrate the linescan camera, which is undesirable since it increases the cost of the camera setup.
Dynamic entocentric linescan camera models are proposed by Gupta and Hartley (1997), MVTec Software GmbH (2005a, 2005b), Steger et al. (2008, 2018), Draréni et al. (2011), Hui et al. (2012a, 2012b, 2013), Donné et al. (2017) and Zhang et al. (2018). Lens distortions are not modeled in the camera models by Gupta and Hartley (1997), Draréni et al. (2011), Hui et al. (2012b), Donné et al. (2017) and Zhang et al. (2018). The camera models by Hui et al. (2012a, 2013) take lens distortions into account, but assume that the sensor is mounted exactly behind the principal point. 3D calibration objects are used by Hui et al. (2012a, 2012b). Furthermore, Hui et al. (2013) use an additional areascan camera to calibrate the linescan camera, which is undesirable for the reasons that were mentioned previously. A camera model that fulfills all of the above requirements (i.e., a dynamic camera model that supports lens distortions, allows an arbitrary line sensor position with respect to the principal point, and uses a planar calibration object) is described by MVTec Software GmbH (2005a, 2005b) and Steger et al. (2008, 2018). We will describe it in more detail in Sect. 3.2 and use it to develop our proposed model for telecentric linescan cameras.
Fundamental Camera Models
The camera model for linescan cameras with telecentric lenses that we will propose in Sect. 4 is based on camera models for areascan cameras and on a camera model for linescan cameras with entocentric lenses. Therefore, we will discuss these models first. Our presentation is based on the descriptions in Steger et al. (2018, Chapter 3.9) and Steger (2017, Section 6).
We start by discussing camera models for areascan cameras since we will later model linescan cameras conceptually as one particular line of an areascan camera. This will enable us to model the fact that the line sensor may not be perfectly aligned with the optical axis of the lens. This, in turn, will allow us to model a more general class of lens distortions for linescan cameras.
Furthermore, we will describe some of the properties of the existing camera models. This will allow us to compare the properties of the camera model for telecentric linescan cameras with those of the existing camera models in Sect. 4.
Camera Models for AreaScan Cameras
The camera model for areascan cameras is capable of modeling a multiview setup with \(n_\mathrm {c}\) cameras (Steger 2017, Section 6.1). In this paper, we will only consider single cameras. Therefore, we simplify the discussion to this case.
To calibrate the camera, \(n_\mathrm {o}\) images of a calibration object in different poses are used. Each pose l (\(l = 1, \ldots , n_\mathrm {o}\)) of the calibration object defines a transformation from the calibration object coordinate system to the camera coordinate system. The transformation of a point \({{\varvec{p}}}_\mathrm {o} = (x_\mathrm {o}, y_\mathrm {o}, z_\mathrm {o})^{\top }\) is given by
where \({{\varvec{t}}}_l = (t_{l,x}, t_{l,y}, t_{l,z})^{\top }\) is a translation vector and \({{{\mathbf {\mathtt{{R}}}}}}_l\) is a rotation matrix that is parameterized by Euler angles: \({{{\mathbf {\mathtt{{R}}}}}}_l = {{{\mathbf {\mathtt{{R}}}}}}_x(\alpha _l) {{{\mathbf {\mathtt{{R}}}}}}_y(\beta _l) {{{\mathbf {\mathtt{{R}}}}}}_z(\gamma _l)\). The transformation can also be written as a \(4 \times 4\) homogeneous matrix:
where it is silently assumed that \({{\varvec{p}}}_l\) and \({{\varvec{p}}}_\mathrm {o}\) have been extended with a fourth coordinate of 1.
As discussed by Steger (2017, Section 6.1) and Ulrich and Steger (2019), the origin of the camera coordinate system lies at the center of the entrance pupil of the lens.
Next, the point \({{\varvec{p}}}_l = (x_l, y_l, z_l)^{\top }\) is projected into the image plane. For entocentric lenses, the projection is given by:
where c is the principal distance of the lens. For telecentric lenses, the projection is given by:
where m is the magnification of the lens.
Subsequently, the undistorted point \((x_\mathrm {u}, y_\mathrm {u})^{\top }\) is distorted to a point \((x_\mathrm {d}, y_\mathrm {d})^{\top }\). We support two distortion models (Steger 2017, Section 6.1; Steger et al. 2018, Chapter 3.9.1.3): the division model (Lenz 1987, 1988; Lenz and Fritsch 1990; Lanser et al. 1995; Lanser 1997; Blahusch et al. 1999; Fitzgibbon 2001; Steger 2012) and the polynomial model (Brown 1966, 1971).
In the division model, the undistorted point \((x_\mathrm {u}, y_\mathrm {u})^{\top }\) is computed from the distorted point by:
where \(r_\mathrm {d}^2 = x_\mathrm {d}^2 + y_\mathrm {d}^2\). The division model can be inverted analytically:
where \(r_\mathrm {u}^2 = x_\mathrm {u}^2 + y_\mathrm {u}^2\). The division model only supports radial distortion.
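The division model and its analytic inverse can be sketched as follows (hypothetical helper names; coordinates are metric image-plane coordinates):

```python
import math

def undistort_division(xd, yd, kappa):
    """Undistort with the division model:
    (xu, yu) = (xd, yd) / (1 + kappa * rd^2)."""
    f = 1.0 / (1.0 + kappa * (xd * xd + yd * yd))
    return xd * f, yd * f

def distort_division(xu, yu, kappa):
    """Analytic inverse of the division model.  With xd = s * xu, the
    model yields kappa * ru^2 * s^2 - s + 1 = 0, whose relevant root is
    s = 2 / (1 + sqrt(1 - 4 * kappa * ru^2))."""
    ru2 = xu * xu + yu * yu
    s = 2.0 / (1.0 + math.sqrt(1.0 - 4.0 * kappa * ru2))
    return xu * s, yu * s

# Round trip with illustrative values (kappa in 1/m^2, coordinates in m).
xd, yd = 0.004, -0.003
xu, yu = undistort_division(xd, yd, kappa=-800.0)
xb, yb = distort_division(xu, yu, kappa=-800.0)
print(xb - xd, yb - yd)
```

The round trip returns the original point up to floating-point rounding, illustrating that no numerical root finding is needed for this model.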
The polynomial model supports radial as well as decentering distortions. The undistorted point is computed by:
The polynomial model cannot be inverted analytically. The computation of the distorted point from the undistorted point must be performed by a numerical root finding algorithm.
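A minimal sketch of this numerical inversion, assuming the standard Brown parameterization (the paper's exact Eq. (7) may differ in detail), using a simple fixed-point iteration instead of a general root finder:

```python
def undistort_poly(xd, yd, K1, K2, K3, P1, P2):
    """Undistortion in the polynomial model (standard Brown form is
    assumed here)."""
    r2 = xd * xd + yd * yd
    radial = K1 * r2 + K2 * r2**2 + K3 * r2**3
    xu = xd * (1 + radial) + P1 * (r2 + 2 * xd * xd) + 2 * P2 * xd * yd
    yu = yd * (1 + radial) + 2 * P1 * xd * yd + P2 * (r2 + 2 * yd * yd)
    return xu, yu

def distort_poly(xu, yu, K1, K2, K3, P1, P2, iters=50):
    """Numerical inversion by fixed-point iteration: write
    u(d) = d + delta(d) and iterate d <- u_target - delta(d).
    Converges for the small distortions of typical lenses."""
    xd, yd = xu, yu                     # distortion-free initial guess
    for _ in range(iters):
        ux, uy = undistort_poly(xd, yd, K1, K2, K3, P1, P2)
        xd, yd = xd - (ux - xu), yd - (uy - yu)
    return xd, yd

# Round trip with illustrative coefficients (image-plane coords in mm).
coeffs = dict(K1=-2e-3, K2=1e-5, K3=0.0, P1=1e-5, P2=-2e-5)
xu, yu = undistort_poly(3.0, -2.0, **coeffs)
xb, yb = distort_poly(xu, yu, **coeffs)
print(abs(xb - 3.0) < 1e-10, abs(yb + 2.0) < 1e-10)
```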
Finally, the distorted point \((x_\mathrm {d}, y_\mathrm {d})^{\top }\) is transformed into the image coordinate system:
Here, \(s_x\) and \(s_y\) denote the pixel pitches on the sensor and \((c_x, c_y)^{\top }\) is the principal point. Note that x refers to the horizontal axis of the image (increasing rightward) and y to the vertical axis (increasing downward).
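Chaining the pose transformation (1), the telecentric projection, the division-model distortion, and the pixel transformation (8) gives the complete point-to-pixel mapping of a telecentric areascan camera. A minimal sketch with hypothetical parameter values (all helper names are ours):

```python
import math

def rot_xyz(a, b, g):
    """R = Rx(a) * Ry(b) * Rz(g), angles in radians."""
    ca, sa = math.cos(a), math.sin(a)
    cb, sb = math.cos(b), math.sin(b)
    cg, sg = math.cos(g), math.sin(g)
    Rx = [[1, 0, 0], [0, ca, -sa], [0, sa, ca]]
    Ry = [[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]]
    Rz = [[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]]
    mm = lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(3))
                        for j in range(3)] for i in range(3)]
    return mm(Rx, mm(Ry, Rz))

def project_telecentric(p_o, R, t, m, kappa, sx, sy, cx, cy):
    # Eq. (1): calibration-object -> camera coordinates
    p = [sum(R[i][j] * p_o[j] for j in range(3)) + t[i] for i in range(3)]
    # telecentric (parallel) projection: z drops out
    xu, yu = m * p[0], m * p[1]
    # distort with the division model (analytic inverse of (5))
    s = 2.0 / (1.0 + math.sqrt(1.0 - 4.0 * kappa * (xu * xu + yu * yu)))
    xd, yd = s * xu, s * yu
    # Eq. (8): metric image-plane -> pixel coordinates
    return xd / sx + cx, yd / sy + cy

R = rot_xyz(0.1, -0.05, 0.3)
col, row = project_telecentric(
    p_o=(0.01, -0.02, 0.0), R=R, t=(0.0, 0.0, 1.0),
    m=0.5, kappa=-30.0, sx=3.45e-6, sy=3.45e-6, cx=1024.0, cy=768.0)
print(col, row)
```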
The calibration of the camera model is described in detail in Steger (2017, Sections 9 and 10).
Remark 1
The above parameterization is very intuitive for machine vision users (Steger 2017, Section 6.1). All parameters have a physical meaning that is easy to understand. Approximate initial values for the interior orientation parameters simply can be read off the data sheets of the camera (\(s_x\) and \(s_y\)) and the lens (c or m) or can be obtained easily otherwise (the initial values for the principal point can be set to the center of the image and the distortion coefficients can typically be set to 0). Furthermore, the calibration results are easy to check for validity.
Remark 2
For telecentric lenses, \((c_x, c_y)^{\top }\) is solely defined by the lens distortions (Steger 2017, Remark 2). If there are no lens distortions, \((c_x, c_y)^{\top }\) and \((t_{l,x}, t_{l,y})^{\top }\) have the same effect. Therefore, in this case \((c_x, c_y)^{\top }\) should remain fixed at the initial value specified by the user (typically, the image center).
Remark 3
For telecentric lenses, the pose parameter \(t_{l,z}\) obviously cannot be determined. We arbitrarily set it to 1 m (Steger 2017, Remark 4).
Remark 4
For telecentric cameras and planar calibration objects, the rotation part of the pose can only be determined up to a twofold ambiguity from a single camera (Steger 2017, Remark 5). This is a special case of a Necker reversal (Shapiro et al. 1995, Section 4.1) when the object is planar. The two sets of pose parameters \((\alpha _l, \beta _l, \gamma _l)\) and \((-\alpha _l, -\beta _l, \gamma _l)\) (with identical translation vectors) result in the same points in the image (Steger 2018, Section 2.4). If a correct exterior orientation of the calibration object is required in the application, the user must resolve this ambiguity by selecting the correct pose based on prior knowledge.
Remark 5
An operation that we will use below is the calculation of the optical ray of an image point (also called camera ray or line of sight). We first invert (8):
Then, we rectify the lens distortions by applying (5) or (7). Now, for entocentric lenses, the optical ray is given by:
while for telecentric lenses, it is given by:
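Remark 5 can be sketched in code for the division model (the ray parameterization below is our reading of (10) and (11); helper names are ours):

```python
def optical_ray(col, row, c_or_m, kappa, sx, sy, cx, cy, telecentric):
    """Return (origin, direction) of the optical ray of a pixel."""
    # invert (8): pixel -> metric image-plane coordinates, Eq. (9)
    xd, yd = sx * (col - cx), sy * (row - cy)
    # rectify the division-model distortion, Eq. (5)
    f = 1.0 / (1.0 + kappa * (xd * xd + yd * yd))
    xu, yu = f * xd, f * yd
    if telecentric:
        # all rays are parallel to the optical axis; c_or_m = magnification m
        origin = (xu / c_or_m, yu / c_or_m, 0.0)
        direction = (0.0, 0.0, 1.0)
    else:
        # rays pass through the projection center; c_or_m = principal distance c
        origin = (0.0, 0.0, 0.0)
        direction = (xu / c_or_m, yu / c_or_m, 1.0)
    return origin, direction
```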
Remark 6
In machine vision applications, it is often desirable to remove the lens distortions from an image or from data, such as subpixel-precise contours, that were extracted from an image. For example, fitting a line to a contour only returns useful results if the lens distortions have been removed. Since (5) and (7) represent transformations that are performed purely within the image plane, this can easily be achieved. Furthermore, it is often desirable to remove perspective distortions in addition to lens distortions. This can be achieved from a single image if the scene exhibits a known geometry, e.g., a plane, by intersecting the optical ray with the plane (Steger et al. 2018, Section 3.9.5.4).
Camera Model for LineScan Cameras with Entocentric Lenses
The camera model for linescan cameras was first described in MVTec Software GmbH (2005a, 2005b) and Steger et al. (2008). Our discussion is based on Steger et al. (2018, Chapter 3.9.3).
As described in Sect. 1, we assume that the relative motion between the linescan camera and the object is linear with constant velocity. Therefore, the camera motion can be described by the motion vector \({{\varvec{v}}} = (v_x, v_y, v_z)^{\top }\). The vector \({{\varvec{v}}}\) is described in units of meters per scan line in the camera coordinate system (i.e., the units are m pixel\(^{-1}\)). This definition of \({{\varvec{v}}}\) assumes a moving camera and a fixed object. If the camera is stationary and the object is moving, e.g., on a conveyor belt, we can simply use \(-{{\varvec{v}}}\) as the motion vector (see Fig. 1).
The camera model for linescan cameras is displayed in Fig. 1. The origin of the camera coordinate system lies at the center of the entrance pupil of the lens. The \({{\varvec{z}}}\) axis is identical to the optical axis and is oriented such that points in front of the camera have positive z coordinates. The \({{\varvec{x}}}\) axis is parallel to the sensor line and perpendicular to the \({{\varvec{z}}}\) axis. It points rightward in the image. The \({{\varvec{y}}}\) axis is perpendicular to the sensor line and to the \({{\varvec{z}}}\) axis such that a right-handed coordinate system is obtained.
Like for areascan cameras, the transformation from the calibration object coordinate system to the camera coordinate system is given by (1). In contrast to areascan cameras, this exterior orientation refers only to the first line of the image. Since the camera moves relative to the object, the exterior orientation is different for each line. However, because we assume a linear motion, the motion vector \({{\varvec{v}}}\) can be used to compute the exterior orientation of all lines. Therefore, the single exterior orientation is sufficient.
Since we want to be able to model linescan cameras for which the sensor line is not perfectly aligned with the optical axis, we model the sensor line as one particular line of a virtual areascan camera. We use the principal point \((c_x, c_y)^{\top }\) to model this misalignment (cf. Fig. 1). The semantics of \(c_y\) are slightly different than for areascan cameras: \(c_y = 0\) signifies that the sensor line is perfectly aligned with the optical axis in the \({{\varvec{y}}}\) direction.
The remaining parameters of the model are identical to those of areascan cameras (see Sect. 3.1): c is the principal distance, the lens distortions are described by (5) or (7), and \(s_x\) and \(s_y\) describe the pixel pitch on the sensor.
To compute the projection of a point \({{\varvec{p}}}_\mathrm {c} = (x_\mathrm {c}, y_\mathrm {c}, z_\mathrm {c})^{\top }\) that has been transformed into the camera coordinate system, we can use the fact that \({{\varvec{p}}}_\mathrm {c}\) moves along the straight line \({{\varvec{p}}}_\mathrm {c} - t {{\varvec{v}}}\), where t denotes the number of scan lines that have been acquired since the first scan line. As the point moves, it must at some instant intersect the optical ray of the point \({{\varvec{p}}}_\mathrm {s} = (x_\mathrm {s}, 0)^{\top }\) on the sensor line to which it projects. This optical ray is given by (9) and (10).
Let us assume that we have transformed the point \({{\varvec{p}}}_\mathrm {s}\) to a distorted image point \({{\varvec{p}}}_\mathrm {d}\) by (9), i.e., \((x_\mathrm {d}, y_\mathrm {d})^{\top }= (s_x (x_\mathrm {s} - c_x), -s_y c_y)^{\top }\). Furthermore, let us call the undistortion function in (5) or (7) \({{\varvec{u}}}({{\varvec{p}}}) = (u_x(x_\mathrm {d}, y_\mathrm {d}), u_y(x_\mathrm {d}, y_\mathrm {d}))^{\top }\). Then, the intersection of the moving point and the optical ray results in the following equation system:
The equation system (12), which for both distortion models is a polynomial equation system, must be solved for \(\lambda \), t, and \(x_\mathrm {d}\). Once t and \(x_\mathrm {d}\) have been determined, the point is transformed into the image coordinate system by:
Thus, the interior orientation of linescan cameras with entocentric lenses is given by: c; \(\kappa \) or \(K_1\), \(K_2\), \(K_3\), \(P_1\), \(P_2\); \(s_x\), \(s_y\), \(c_x\), \(c_y\), \(v_x\), \(v_y\), and \(v_z\).
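Since (12) generally has no closed-form solution, the projection must be computed numerically. The following is our own sketch for the division model (not the paper's algorithm): it reduces (12) to two equations in t and \(x_\mathrm {d}\) by eliminating \(\lambda \) and solves them by Newton iteration with a numerical Jacobian.

```python
def project_linescan_entocentric(pc, v, c, kappa, yd, iters=20):
    """Project a camera-frame point pc = (xc, yc, zc) at t = 0 moving
    along pc - t*v into an entocentric linescan camera.  c = principal
    distance, yd = -sy*cy is the fixed metric y coordinate of the sensor
    line.  Returns (t, xd); assumes zc != 0 and a non-degenerate setup."""
    xc, yc, zc = pc
    vx, vy, vz = v

    def F(t, xd):
        lam = zc - t * vz                  # depth along the optical ray
        u = 1.0 / (1.0 + kappa * (xd * xd + yd * yd))
        return (xc - t * vx - lam * u * xd / c,
                yc - t * vy - lam * u * yd / c)

    t, xd = yc / vy, c * xc / zc           # distortion-free initial guess
    h = 1e-9
    for _ in range(iters):
        f1, f2 = F(t, xd)
        f1t, f2t = F(t + h, xd)
        f1x, f2x = F(t, xd + h)
        d11, d21 = (f1t - f1) / h, (f2t - f2) / h   # d/dt
        d12, d22 = (f1x - f1) / h, (f2x - f2) / h   # d/dxd
        det = d11 * d22 - d12 * d21
        t -= (d22 * f1 - d12 * f2) / det
        xd -= (d11 * f2 - d21 * f1) / det
    return t, xd
```

For mild distortions the distortion-free initial guess is close to the root and the iteration converges quickly.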
Remark 7
The model is overparameterized. The values of c and \(s_x\) cannot be determined simultaneously. This can be solved by fixing \(s_x\) at the initial value that was specified by the user. Furthermore, since there is only one image line, \(s_y\) is only used to specify the principal point in pixels. It has no physical meaning on the sensor and only occurs in the term \(y_\mathrm {d} = -s_y c_y\). Therefore, it cannot be determined and is kept fixed at the initial value specified by the user. In practice, \(s_y = s_x\) is typically used.
Remark 8
The value of \(c_y\) is solely defined by the lens distortions. If there are no lens distortions, \(c_y\) should remain fixed at the initial value specified by the user (typically, \(c_y = 0\) is used).
Remark 9
If \(c_y = 0\), the effect of the lens distortions is purely along the sensor line, i.e., in the horizontal direction. If \(c_y \ne 0\), lens distortions will also cause the sensor line to appear bent (Steger et al. 2018, Chapter 3.9.3.4).
Remark 10
The parameters \(P_1\) and \(P_2\) of the polynomial distortion model are highly correlated with other parameters of the entocentric linescan camera model, especially if the radial distortions are small. Therefore, they typically cannot be determined reliably. Consequently, in practice, they should be set to 0 and should be excluded from the calibration.
Remark 11
In contrast to areascan cameras, where the aspect ratio of the pixels is determined by \(s_x\) and \(s_y\), the aspect ratio of the pixels is determined by \(s_x\) and \(v_y\) for linescan cameras. Furthermore, in contrast to areascan cameras, for which we have assumed orthogonal pixels, for linescan cameras, a nonzero \(v_x\) will result in pixels that appear skewed (i.e., nonorthogonal). Therefore, to achieve square pixels, it is important that the sensor is aligned to be perpendicular to the motion vector and the motion speed or the line frequency of the camera must be selected appropriately (Steger et al. 2018, Chapter 3.9.3.4).
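Remark 11 in numbers: for square pixels, the object must advance by exactly one object-space pixel pitch per scan line. A hypothetical telecentric setup (all values are illustrative):

```python
# Hypothetical setup: telecentric lens with magnification m, sensor pixel
# pitch sx, and line frequency f_line.
m = 0.5            # magnification (object -> image)
sx = 3.45e-6       # pixel pitch on the sensor [m]
f_line = 50e3      # line frequency [Hz]

pitch_obj = sx / m            # object-space pixel size [m]
vy = pitch_obj                # required motion per scan line [m / line]
belt_speed = vy * f_line      # required belt speed [m/s]
print(f"{pitch_obj:.2e} m per line, belt speed {belt_speed:.3f} m/s")
```

If the belt speed or line frequency deviates from this relation, \(v_y \ne s_x / m\) and the pixels become rectangular; a nonzero \(v_x\) additionally makes them skewed.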
Remark 12
A pure removal of lens distortions is impossible for entocentric linescan cameras if \(c_y \ne 0\). To remove the lens distortions, we would have to compute the optical ray for an image point and then would have to reproject it into the rectified image. However, if there are lens distortions and if \(c_y \ne 0\), the optical ray in general does not project to a single point in the rectified image. Therefore, a pure removal of lens distortions requires a 3D reconstruction to be available because we can then reconstruct a unique 3D point that we can project into the rectified image. What is possible, on the other hand, is to reproject the image onto a world plane (Steger et al. 2018, Chapter 3.9.5.4). This rectification also removes the lens distortions, of course.
Camera Model for LineScan Cameras with Telecentric Lenses
Based on the discussion in Sect. 3, we can now derive the camera model for linescan cameras with telecentric lenses.
Camera Model
The first step of the camera model is identical to that in Sect. 3.2: We transform points from the calibration object coordinate system to the camera coordinate system by (1). Again, we will call the transformed point \({{\varvec{p}}}_\mathrm {c}\).
To project the point into the image, we use the same approach as in Sect. 3.2: We intersect the line on which the point moves with the optical ray of the point to which it projects. The equation of the optical ray is given by (11). This results in the following equation system:
where \({{\varvec{u}}}({{\varvec{p}}}) = (u_x(x_\mathrm {d}, y_\mathrm {d}), u_y(x_\mathrm {d}, y_\mathrm {d}))^{\top }\) is defined as in Sect. 3.2. It can be seen that \(\lambda \) does not occur in (14) and (15). Therefore, neither \(z_\mathrm {c}\) nor \(v_z\) influence the projection and we can omit (16). Consequently, linescan cameras with telecentric lenses perform an orthographic projection, similar to areascan cameras with telecentric lenses. Thus, with respect to the taxonomy by Sturm et al. (2010), linescan cameras with telecentric lenses are central cameras (Sturm et al. 2010, Section 3), unlike linescan cameras with entocentric lenses, which are axial cameras (Ramalingam et al. 2006; Sturm et al. 2010, Section 3.1.4). Furthermore, with respect to the taxonomy by Ye and Yu (2014), linescan cameras with telecentric lenses are orthographic cameras and not pushbroom cameras.
For the polynomial model, Eqs. (14) and (15) define a polynomial equation system of degree 7 in the unknowns \(x_\mathrm {d}\) and t. Therefore, the equations cannot be solved analytically, i.e., a numerical root finding algorithm must be used to solve them. For the division model, however, an analytical solution is possible. Specializing (14) and (15) to the division model results in the following equation system:
where \(u = 1/(1 + \kappa (x_\mathrm {d}^2 + y_\mathrm {d}^2))\). Since \(y_\mathrm {d} = s_y c_y\) is constant, we solve (18) for t:
Substituting (19) into (17) results in:
If we multiply both sides by \(1 + \kappa (x_\mathrm {d}^2 + y_\mathrm {d}^2)\), expand the terms, and sort them according to powers of \(x_\mathrm {d}\), we obtain:
The term \(x_\mathrm {c} - y_\mathrm {c} v_x / v_y\) is the \({{\varvec{x}}}\) coordinate of the point at which the line \((x_\mathrm {c}, y_\mathrm {c})^{\top } - t (v_x, v_y)^{\top }\) intersects the \({{\varvec{x}}}\) axis (or, in 3D, at which the line \((x_\mathrm {c}, y_\mathrm {c}, z_\mathrm {c})^{\top } - t (v_x, v_y, v_z)^{\top }\) intersects the \({{\varvec{x}}}{{\varvec{z}}}\) plane). Let us call this term \(x_0\). The term \(1 + \kappa y_\mathrm {d}^2\) represents the inverse of the undistortion factor u for \(x_\mathrm {d} = 0\). Let us call this term \(d_0\). Then, we have:
Hence, for \(\kappa \ne 0\) and \(x_0 \ne 0\), we have:
For \(\kappa = 0\) or \(x_0 = 0\), (22) reduces to a linear equation. We examine both cases in turn. For \(x_0 = 0\), we have:
Inserting the value of \(x_\mathrm {d}\) obtained from (23) or (24) into (19) returns the value of t in both cases. For \(\kappa = 0\), we have:
In this case, Eq. (19) can be simplified to:
Note that for \(\kappa = 0\), \(y_\mathrm {d}\), i.e., \(c_y\), is not meaningful (cf. Remarks 8 and 20). Therefore, if it is known a priori that \(\kappa = 0\), \(c_y\) (and, therefore, \(y_\mathrm {d}\)) should be set to 0, which simplifies the equations even further to:
If there are lens distortions, we can see from (23) that there are two potential solutions for the projection into the image, whereas in the cases without distortion, (24) and (25), there is a unique solution. Intuitively, we expect a unique solution also in the case with lens distortions, since there is only one particular instant of time at which the point appears in front of the sensor line.
Proposition 1
In (23), the correct solution is given by:
Proof
To prove the assertion, we will examine the limit of (23) for \(\kappa \rightarrow 0\). Obviously, this solution must converge to (25) for the correct solution because (17) and (18) are continuous around \(\kappa = 0\). We first examine the solution in (29) and note that both the numerator and denominator converge to 0 for \(\kappa \rightarrow 0\). Therefore, we use L’Hôpital’s rule to compute the limit:
Hence, Eq. (29) converges to (25) for \(\kappa \rightarrow 0\). We now examine the solution of (23) with the plus sign and note that the numerator converges to 2/m for \(\kappa \rightarrow 0\), while the denominator converges to 0. Hence, the second solution converges to \(\infty \) for \(\kappa \rightarrow 0\). Therefore, Eq. (29) is the correct solution. \(\square \)
Remark 13
The optical ray of an image point \((x_\mathrm {i}, y_\mathrm {i})^{\top }\) can be computed as follows. First, Eq. (13) is inverted:
Next, Eqs. (14) and (15) are solved for \((x_\mathrm {c}, y_\mathrm {c})^{\top }\):
where \(y_\mathrm {d} = s_y c_y\). The optical ray is then given by:
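In code, this inverse computation might look as follows (division model, hypothetical parameter names); the returned point, together with the direction \((0, 0, 1)^{\top }\), spans the optical ray:

```python
def optical_ray_point(x_i, y_i, m, kappa, s_x, s_y, c_x, c_y, v_x, v_y):
    """Camera-frame point (x_c, y_c, 0) on the optical ray of the image
    point (x_i, y_i) of a telecentric line-scan camera (division model).
    The ray direction is (0, 0, 1) since the projection is orthographic."""
    x_d = s_x * (x_i - c_x)      # invert the pixel transform
    t = y_i                      # the row index is the acquisition time
    y_d = s_y * c_y
    u = 1.0 / (1.0 + kappa * (x_d**2 + y_d**2))  # undistortion factor
    x_c = x_d * u / m + t * v_x  # solve (14)/(15) for (x_c, y_c)
    y_c = y_d * u / m + t * v_y
    return x_c, y_c
```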
Remark 14
In contrast to linescan cameras with entocentric lenses (cf. Remark 12), a pure removal of lens distortions is possible for linescan cameras with telecentric lenses because (14) and (15) do not depend on \(z_\mathrm {c}\). Given an image point \((x_\mathrm {i}, y_\mathrm {i})^{\top }\), the corresponding point \((x_\mathrm {c}, y_\mathrm {c})^{\top }\) in the camera coordinate system can be computed, as described in Remark 13. This point can then be projected into a rectified camera for which all distortion coefficients have been set to 0. Moreover, any skew in the pixels can be removed by setting \(v_x\) to 0 in the rectified camera. Finally, square pixels can be enforced by setting \(s_x\) to \(\min (s_x, m v_y)\) and then setting \(v_y\) to \(s_x / m\). This approach ensures that no aliasing occurs when rectifying the image.
Remark 15
As is the case for linescan cameras with entocentric lenses (cf. Remark 12), a reprojection of the image onto a world plane is possible for linescan cameras with telecentric lenses.
Calibration
Like the camera models that were described in Sect. 3, the camera is calibrated by using the planar calibration object introduced in Steger (2017, Section 9). The calibration object has a hexagonal layout of circular control points. It has been designed in such a way that it can cover the entire field of view. Further advantages of this kind of calibration object are discussed in Steger (2017, Section 9).
Let the known 3D coordinates of the centers of the control points of the calibration object be denoted by \({{\varvec{p}}}_j\) (\(j = 1, \ldots , n_\mathrm {m}\), where \(n_\mathrm {m}\) denotes the number of control points on the calibration object). The user acquires \(n_\mathrm {o}\) images of the calibration object. Let us denote the exterior orientation parameters of the calibration object in image l by \({{\varvec{e}}}_l\) (\(l = 1, \ldots , n_\mathrm {o}\)), the interior orientation parameters of the camera by \({{\varvec{i}}}\), and the projection of a point in the calibration object coordinate system to the image coordinate system by \({\varvec{\pi }}\) (cf. Sect. 4.1). In addition, let \(v_{jl}\) denote a function that is 1 if the control point j of the observation l of the calibration object is visible with the camera, and 0 otherwise. Finally, let \({{\varvec{p}}}_{jl}\) denote the position of control point j in image l. Then, the camera is calibrated by minimizing the following function:
The minimization is performed by a suitable version of the sparse Levenberg–Marquardt algorithm described in Hartley and Zisserman (2003, Appendix A6).
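To illustrate the structure of this least-squares problem (not the authors' implementation), the following toy Python sketch minimizes stacked reprojection residuals with a minimal dense Levenberg–Marquardt loop. The projection is a deliberately simplified, distortion-free telecentric linescan model with identity rotation, parameterized by \(\theta = (a, s, w, t_x, t_y)\) with \(a = m / s_x\), \(s = a v_x / v_y\), and \(w = 1 / v_y\):

```python
import numpy as np

def project_toy(theta, P):
    """Distortion-free telecentric line-scan projection of planar points
    P (n x 2); identity rotation is assumed for this toy example."""
    a, s, w, t_x, t_y = theta
    X, Y = P[:, 0] + t_x, P[:, 1] + t_y
    return np.column_stack([a * X - s * Y, w * Y])

def residuals(theta, P, obs, vis):
    """Stacked reprojection residuals of the calibration cost (35);
    vis is the visibility indicator v_jl."""
    return ((project_toy(theta, P) - obs) * vis[:, None]).ravel()

def levenberg_marquardt(fun, x0, n_iter=100, lam=1e-3):
    """Minimal dense LM with a forward-difference Jacobian; a stand-in
    for the sparse variant cited in the text (illustrative only)."""
    x = np.asarray(x0, float)
    r = fun(x)
    for _ in range(n_iter):
        J = np.empty((r.size, x.size))
        for j in range(x.size):           # numeric Jacobian, column by column
            d = np.zeros_like(x)
            d[j] = 1e-7 * max(1.0, abs(x[j]))
            J[:, j] = (fun(x + d) - r) / d[j]
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -(J.T @ r))
        x_new = x + step
        r_new = fun(x_new)
        if r_new @ r_new < r @ r:         # accept step, enlarge trust region
            x, r, lam = x_new, r_new, lam * 0.5
        else:                             # reject step, shrink trust region
            lam *= 10.0
    return x
```

With synthetic observations generated from known parameters, the loop recovers them; the real calibration replaces the toy projection by \({\varvec{\pi }}\), stacks the residuals over all images l and control points j, and exploits the sparsity of the Jacobian.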
The points \({{\varvec{p}}}_{jl}\) are extracted by fitting ellipses (Fitzgibbon et al. 1999) to edges extracted with a subpixel-accurate edge extractor (Steger 1998b, Chapter 3.3; Steger 2000). As discussed by Steger (2017, Section 5.2) and Mallon and Whelan (2007), this causes a bias in the point positions. Since telecentric linescan cameras perform an orthographic projection, there is no perspective bias, i.e., the bias consists solely of distortion bias. The bias can be removed with the approach for entocentric linescan cameras described by Steger (2017, Section 10).
The optimization of (35) requires initial values for the unknown parameters. Initial values for the interior orientation parameters, except for the motion vector, can be obtained from the specification of the camera and the lens, as described in Remark 1. In contrast to areascan cameras, \(c_y = 0\) is typically used as the initial value. An approximate value for \(v_y\) usually will be known from the considerations that led to the linescan camera setup. Finally, \(v_x\) typically can be set to 0. With known initial values for the interior orientation, the image points \({{\varvec{p}}}_{jl}\) can be transformed into metric coordinates in the camera coordinate system using (31)–(33). This allows us to use the OnP algorithm described by Steger (2018) to obtain estimates for the exterior orientation of the calibration object.
Remark 16
The Levenberg–Marquardt algorithm requires the partial derivatives of \(x_\mathrm {i}\) and \(y_\mathrm {i}\) with respect to the interior and exterior orientation parameters of the camera model. This, in turn, requires the partial derivatives of \(x_\mathrm {d}\) and t with respect to the interior orientation parameters. These can be computed analytically using the implicit function theorem (de Oliveira 2013).
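For example, for the division model, the derivative of \(x_\mathrm {d}\) with respect to \(\kappa \) follows from the quadratic (22) without differentiating the explicit root. The sketch below uses our notation (with \(x_0\) and \(d_0\) as defined above):

```python
import math

def x_d_root(m, kappa, x_0, v_x, y_d):
    """Root (29) of the quadratic (22):
       m*kappa*x_0*x_d^2 - x_d + m*x_0*(1 + kappa*y_d^2) + v_x*y_d = 0."""
    a = m * kappa * x_0
    c = m * x_0 * (1.0 + kappa * y_d**2) + v_x * y_d
    return (1.0 - math.sqrt(1.0 - 4.0 * a * c)) / (2.0 * a)

def dxd_dkappa(m, kappa, x_0, v_x, y_d):
    """d x_d / d kappa via the implicit function theorem:
       dx_d/dkappa = -F_kappa / F_xd, with F the quadratic above."""
    x_d = x_d_root(m, kappa, x_0, v_x, y_d)
    dF_dxd = 2.0 * m * kappa * x_0 * x_d - 1.0      # partial w.r.t. x_d
    dF_dkappa = m * x_0 * (x_d**2 + y_d**2)         # partial w.r.t. kappa
    return -dF_dkappa / dF_dxd
```

The analytic value agrees with a finite-difference approximation of the root, which is a convenient sanity check for such derivatives.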
LineScan Cameras with Telecentric Lenses in Projective Geometry
In this section, we will consider telecentric linescan cameras without lens distortions. In this case, Eqs. (27), (28), and (13) can be written as the following calibration matrix:
Since m and \(s_x\) cannot be determined simultaneously (cf. Remark 19), we have removed this overparameterization by using the parameter \(a = m / s_x\). Furthermore, since the principal point is undefined if there are no distortions (see Remark 20), we have used \(c_x = c_y = 0\).^{Footnote 6} The orthographic projection that the telecentric linescan camera performs can be written as:
Finally, the pose of the camera with respect to the world coordinate system can be written as a \(4 \times 4\) homogeneous matrix \({{{\mathbf {\mathtt{{H}}}}}}\) (see Sect. 3.1). Hence, if there are no distortions, linescan cameras with telecentric lenses are affine cameras (Hartley and Zisserman 2003, Chapter 6.3.4) with the following camera matrix:
A general affine camera can be written as the following camera matrix:
where the top left \(2 \times 3\) submatrix must have rank 2 (Hartley and Zisserman 2003, Chapter 6.3.4). \({{{\mathbf {\mathtt{{M}}}}}}\) has eight degrees of freedom: its elements \(m_{ij}\) (\(i = 1,2\), \(j = 1, \ldots , 4\)) (Hartley and Zisserman 2003, Chapter 6.3.4). The camera matrix in (38) also has eight degrees of freedom: a, \(v_x\), \(v_y\), \(t_x\), \(t_y\), \(\alpha \), \(\beta \), and \(\gamma \). Therefore, it is natural to examine whether a general affine camera matrix \({{{\mathbf {\mathtt{{M}}}}}}\) can be decomposed uniquely into the eight parameters of a telecentric linescan camera without lens distortions.
Theorem 1
Every affine camera matrix \({{{\mathbf {\mathtt{{M}}}}}}\) can be decomposed into the eight parameters a, \(v_x\), \(v_y\), \(t_x\), \(t_y\), \(\alpha \), \(\beta \), and \(\gamma \) of a telecentric linescan camera without lens distortions. There is a twofold ambiguity in the decomposition: If a valid decomposition of \({{{\mathbf {\mathtt{{M}}}}}}\) is given by \((a, v_x, v_y, t_x, t_y, \alpha , \beta , \gamma )\), another valid decomposition is given by \((a, v_x, -v_y, t_x, -t_y, \alpha + \pi , \beta , \gamma )\).
Proof
To prove Theorem 1, we will make use of the dual image of the absolute conic (DIAC) (Hartley and Zisserman 2003, Chapter 8.5), given by
where \({{{\mathbf {\mathtt{{Q}}}}}}_\infty ^* = {\mathrm {diag}}(1,1,1,0)\) is the canonical form of the absolute dual quadric (Hartley and Zisserman 2003, Chapter 3.7).^{Footnote 7} This will allow us to remove the exterior orientation from the equations to be solved. If we denote the entries of a camera matrix \({{{\mathbf {\mathtt{{M}}}}}}\) by \(m_{ij}\), the elements \(\omega _{ij}\) of the DIAC \({\varvec{\omega }}^{*}\) are given by
Note that \({\varvec{\omega }}^{*}\) is a symmetric matrix.
Let us denote the DIAC of \({{{\mathbf {\mathtt{{M}}}}}}\) by \({\varvec{\omega }}^{*}_{{{\mathbf {\mathtt{{M}}}}}}\) and the DIAC of the camera matrix (38) by \({\varvec{\omega }}^{*}_{{{\mathbf {\mathtt{{P}}}}}}\). We require that both DIACs are identical:
The DIAC \({\varvec{\omega }}^{*}_{{{\mathbf {\mathtt{{P}}}}}}\) is given by:
Hence, we have the following three equations to determine a, \(v_x\), and \(v_y\):
We can solve (46) for \(v_y\):
Substituting \(v_y\) into (45) and solving for \(v_x\) results in:
Substituting \(v_x\) and \(v_y\) into (44) and solving for a yields:
We can assume that \(a = m / s_x\) is positive. Hence, only the positive square root in (49) yields a valid result:
Substituting a into (48) results in:
Note that \(\omega _{11} > 0\) and \(\omega _{22} > 0\) because \({{{\mathbf {\mathtt{{M}}}}}}\) has rank 3 and that \(\omega _{11} \omega _{22} - \omega _{12}^2 \ge 0\) (and, therefore, \(\omega _{11} - \omega _{12}^2 / \omega _{22} \ge 0\)) because of the Cauchy–Schwarz inequality (Steger 2017, Appendix A.1). Consequently, all equations can always be solved.
The above derivation shows that there are two solutions: \((a, v_x, v_y)\) and \((a, v_x, -v_y)\), where we have selected \(v_y = 1 / \sqrt{\omega _{22}}\). These solutions allow us to compute the respective calibration matrix \({{{\mathbf {\mathtt{{K}}}}}}\). If we compute \({{{\mathbf {\mathtt{{K}}}}}}^{-1} {{{\mathbf {\mathtt{{M}}}}}}\), we obtain the first two rows of \({{{\mathbf {\mathtt{{H}}}}}}\) (Steger 2017, Appendix A.1). The left \(2 \times 3\) submatrix of \({{{\mathbf {\mathtt{{K}}}}}}^{-1} {{{\mathbf {\mathtt{{M}}}}}}\) contains the first two rows of the rotation matrix of the pose. The third row of the rotation matrix can be computed as the vector product of the first two rows. This rotation matrix can then be decomposed into the parameters \(\alpha \), \(\beta \), and \(\gamma \). The right \(2 \times 1\) submatrix of \({{{\mathbf {\mathtt{{K}}}}}}^{-1} {{{\mathbf {\mathtt{{M}}}}}}\) contains \(t_x\) and \(t_y\).
We now examine what effect the two different solutions for the interior orientation parameters have on the pose. The two different calibration matrices are given by:
Their inverses are given by:
Since the only difference between the two inverses is the sign of the element (2, 2), this means that when \({{{\mathbf {\mathtt{{H}}}}}}_{1,2}\) are computed as \({{{\mathbf {\mathtt{{K}}}}}}_{1,2}^{-1} {{{\mathbf {\mathtt{{M}}}}}}\), the two solutions \({{{\mathbf {\mathtt{{H}}}}}}_1\) and \({{{\mathbf {\mathtt{{H}}}}}}_2\) will have inverse signs in their second row. If the third row of the rotation matrix is computed by the vector product, the two rotation matrices \({{{\mathbf {\mathtt{{R}}}}}}_1\) and \({{{\mathbf {\mathtt{{R}}}}}}_2\) will differ by having inverse signs in their second and third rows. This corresponds to a multiplication by a matrix \({{{\mathbf {\mathtt{{Q}}}}}} = {\mathrm {diag}}(1, -1, -1)\) on the left. Note that \({{{\mathbf {\mathtt{{Q}}}}}}\) is a rotation by \(\pi \) around the \({{\varvec{x}}}\) axis. Since in our Euler angle representation the rotation around the \({{\varvec{x}}}\) axis is performed last (see Sect. 3.1), multiplying by \({{{\mathbf {\mathtt{{Q}}}}}}\) on the left corresponds to adding \(\pi \) to \(\alpha \). This shows that for the first solution \((a, v_x, v_y)\) of the interior orientation parameters, the solution for the pose parameters is given by \((t_x, t_y, \alpha , \beta , \gamma )\), while for the second solution \((a, v_x, -v_y)\) of the interior orientation parameters, the solution for the pose parameters is given by \((t_x, -t_y, \alpha + \pi , \beta , \gamma )\). \(\square \)
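The proof is constructive and translates directly into a decomposition routine. The following sketch (our notation) takes the first two rows of \({{{\mathbf {\mathtt{{M}}}}}}\) as a \(2 \times 4\) array and returns the solution with \(v_y > 0\); the alternative solution differs in the way described above:

```python
import numpy as np

def decompose_affine(M):
    """Decompose the first two rows M (2 x 4) of an affine camera matrix
    into the interior parameters (a, v_x, v_y) and the pose (R, t),
    following the DIAC-based construction of the proof.  The solution
    with v_y > 0 is returned."""
    w = M[:, :3] @ M[:, :3].T            # DIAC omega* (2 x 2 block)
    v_y = 1.0 / np.sqrt(w[1, 1])         # positive square root chosen
    a = np.sqrt(w[0, 0] - w[0, 1]**2 / w[1, 1])
    v_x = -w[0, 1] / (a * w[1, 1])
    K = np.array([[a, -a * v_x / v_y], [0.0, 1.0 / v_y]])
    H = np.linalg.solve(K, M)            # first two rows of [R | t]
    R = np.vstack([H[:, :3], np.cross(H[0, :3], H[1, :3])])
    return a, v_x, v_y, R, H[:, 3]       # H[:, 3] holds (t_x, t_y)
```

Applying the routine to a camera matrix synthesized from known parameters recovers those parameters exactly (up to the twofold ambiguity).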
Remark 17
A rotation by \(\pi \) around the \({{\varvec{x}}}\) axis corresponds to looking at the front or at the back of an object. Therefore, if the camera is acquiring images of opaque objects, it is typically possible to select the correct solution. However, if images of transparent objects are acquired, which is the case in applications with backlight illumination (at least the calibration object must be transparent in this case), the ambiguity of Theorem 1 can occur in practice.
Remark 18
Theorem 1 shows that every affine camera is equivalent to a telecentric linescan camera with no distortions. On the other hand, Theorem 1 in Steger (2017) shows that every affine camera is equivalent to an areascan camera with a bilateral telecentric tilt lens with no distortions. Therefore, telecentric linescan cameras with no distortions are equivalent to telecentric areascan cameras (with tilt lenses if the pixels are skewed). This means that we can replace telecentric linescan cameras with no distortions by telecentric areascan cameras with no distortions if this is convenient. In particular, this allows us to reuse existing algorithms for telecentric areascan cameras for telecentric linescan cameras.
We note that a telecentric linescan camera that has been rectified with the approach described in Remark 14 fulfills the above criterion of having no distortions. Therefore, a rectified telecentric linescan camera with parameters \(m_\mathrm {l} = m\), \(s_{x,\mathrm {l}} = s_x\), \(s_{y,\mathrm {l}} = s_y\), \(c_{x,\mathrm {l}} = c_x\), \(c_{y,\mathrm {l}} = c_y\), \(v_{x,\mathrm {l}} = 0\), and \(v_{y,\mathrm {l}} = v_y = s_x / m\) can be represented by a telecentric areascan camera (cf. Sect. 3.1) with \(m_\mathrm {a} = m\), \(s_{x,\mathrm {a}} = s_x\), \(s_{y,\mathrm {a}} = s_x\), \(c_{x,\mathrm {a}} = c_x\), and \(c_{y,\mathrm {a}} = c_y s_x / s_y\).
Model Degeneracies
Remark 19
The model for telecentric linescan cameras is overparameterized. The values of m and \(s_x\) cannot be determined simultaneously. This can be solved by fixing \(s_x\) at the initial value that was specified by the user. Furthermore, like for entocentric linescan cameras, \(s_y\) is only used to specify the principal point in pixels (cf. Remark 7) and is therefore kept fixed at the initial value specified by the user.
Remark 20
Like for telecentric areascan cameras (cf. Remark 2), \((c_x, c_y)^{\top }\) is solely defined by the lens distortions for telecentric linescan cameras. If there are no lens distortions, \((c_x, c_y)^{\top }\) and \((t_{l,x}, t_{l,y})^{\top }\) have the same effect. Therefore, in this case \((c_x, c_y)^{\top }\) should remain fixed at the initial value specified by the user (typically, \(c_x\) is set to the horizontal image center and \(c_y\) is set to 0).
Remark 21
Like for entocentric linescan cameras (cf. Remark 10), the parameters \(P_1\) and \(P_2\) of the polynomial distortion model are highly correlated with other parameters of the telecentric linescan camera model, especially if the radial distortions are small. Therefore, they typically cannot be determined reliably. Consequently, in practice, they should be set to 0 and should be excluded from the calibration.
Remark 22
Neither \(t_z\) nor \(v_z\) can be determined since they have no effect on the projection (cf. Sect. 4.1). We leave \(v_z\) at the initial value specified by the user and set \(t_z\) to 1 m (see also Remark 3).
Remark 23
As described in Theorem 1 and Remark 17, there is a sign ambiguity for \(v_y\). Therefore, the user must specify the initial value of \(v_y\) with the correct sign to ensure the calibration converges to the correct solution.
Remark 24
Like for telecentric areascan cameras (see Remark 4), the rotation of the pose of a planar calibration object can only be determined up to a twofold ambiguity. This is a special case of a Necker reversal (Shapiro et al. 1995, Section 4.1) when the object is planar. The two sets of pose parameters \((\alpha _l, \beta _l, \gamma _l)\) and \((-\alpha _l, -\beta _l, \gamma _l)\) (with identical translation vectors) result in the same points in the image. If a correct exterior orientation of the calibration object is required in the application, the user must resolve this ambiguity by selecting the correct pose based on prior knowledge.
Proposition 2
For a planar calibration object, if \(c_y = 0\), or if \(c_y \ne 0\) and there are no distortions (i.e., \(\kappa = 0\) for the division model), and if \(\beta = 0\), then \(v_y\), \(\alpha \), and \(t_y\) cannot be determined simultaneously.
Proof
Without loss of generality, we can assume that the planar calibration object lies in the plane \(z = 0\) in the calibration object coordinate system. Let us first assume we have cameras with \(c_y = 0\). Furthermore, let us assume that the first camera has \(\alpha _1 = 0\), \(t_{y,1} = t_y\), and \(v_{y,1} = v_y\), where \(t_y\) and \(v_y\) are arbitrary but fixed. In addition, let us assume the remaining interior and exterior orientation parameters m, \(c_x\), \(s_x\), \(s_y\), \(v_x\), \(t_x\), and \(\gamma \) are arbitrary and identical for both cameras. We now can select an arbitrary value \(\alpha \) for the rotation around the \({{\varvec{x}}}\) axis for the second camera, i.e., \(\alpha _2 = \alpha \). Then, by setting \(v_{y,2} = v_y \cos \alpha \) and \(t_{y,2} = t_y \cos \alpha \), we obtain a camera with identical projection geometry for points in the plane \(z = 0\). If \(c_y \ne 0\) and if there are no distortions, we must also set \(c_{y,1} = c_y\) and \(c_{y,2} = c_y \cos \alpha \).
To prove that both cameras result in the same projection geometry, we can construct their camera matrices \({{{\mathbf {\mathtt{{P}}}}}}_{1,2}\), as described in Sect. 4.3. Then, we can project an arbitrary point \({{\varvec{p}}}_\mathrm {c} = (x_\mathrm {c}, y_\mathrm {c}, 0)^{\top }\), i.e., we can compute \({{\varvec{p}}}_{\mathrm {i},1,2} = {{{\mathbf {\mathtt{{P}}}}}}_{1,2} {{\varvec{p}}}_\mathrm {c}\). Comparing the resulting expressions for \({{\varvec{p}}}_{\mathrm {i},1,2}\), which we omit here, shows that they are identical. \(\square \)
Proposition 3
For a planar calibration object, if \(c_y = 0\), or if \(c_y \ne 0\) and there are no distortions (i.e., \(\kappa = 0\) for the division model), and if \(\alpha = 0\), then m, \(v_x\), \(\beta \), and \(t_x\) cannot be determined simultaneously.
Proof
Without loss of generality, we can assume that the planar calibration object lies in the plane \(z = 0\) in the calibration object coordinate system. Let us first assume we have cameras with \(c_y = 0\). Furthermore, let us assume that the first camera has \(\beta _1 = 0\), \(t_{x,1} = t_x\), \(m_1 = m\), and \(v_{x,1} = v_x\), where \(t_x\), m, and \(v_x\) are arbitrary but fixed. In addition, let us assume the remaining interior and exterior orientation parameters \(c_x\), \(s_x\), \(s_y\), \(v_y\), \(t_y\), and \(\gamma \) are arbitrary and identical for both cameras. We now can select an arbitrary value \(\beta \) for the rotation around the \({{\varvec{y}}}\) axis for the second camera, i.e., \(\beta _2 = \beta \). Then, by setting \(m_2 = m / \cos \beta \), \(v_{x,2} = v_x \cos \beta \), and \(t_{x,2} = t_x \cos \beta \), we obtain a camera with identical projection geometry for points in the plane \(z = 0\). If \(c_y \ne 0\) and if there are no distortions, we must also set \(c_{y,1} = c_y\) and \(c_{y,2} = c_y / \cos \beta \).
To prove the assertion, we can proceed in the same manner as in the proof of Proposition 2. \(\square \)
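Both propositions are straightforward to verify numerically. The following sketch (our notation; distortion-free, \(c_y = 0\), \(\gamma = 0\)) projects planar points with the general pose model and checks that the parameter substitutions of Propositions 2 and 3 leave the projection unchanged:

```python
import numpy as np

def project_plane(P, m_, s_x, v_x, v_y, t_x, t_y, alpha, beta):
    """Distortion-free telecentric line-scan projection of planar points
    P (n x 2, z = 0), with rotations alpha (x axis) and beta (y axis)."""
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    # camera-frame coordinates of (X, Y, 0) under R_x(alpha) R_y(beta)
    x_c = cb * P[:, 0] + t_x
    y_c = sa * sb * P[:, 0] + ca * P[:, 1] + t_y
    x_i = (m_ / s_x) * (x_c - (v_x / v_y) * y_c)
    y_i = y_c / v_y
    return np.column_stack([x_i, y_i])

rng = np.random.default_rng(1)
P = rng.uniform(-1.0, 1.0, size=(20, 2))
base = project_plane(P, 0.2, 1.0, 0.1, 0.5, 0.3, -0.2, 0.0, 0.0)

# Proposition 2: a rotation alpha exchanged against v_y and t_y
al = 0.6
tilted = project_plane(P, 0.2, 1.0, 0.1, 0.5 * np.cos(al), 0.3, -0.2 * np.cos(al), al, 0.0)
assert np.allclose(base, tilted)

# Proposition 3: a rotation beta exchanged against m, v_x, and t_x
be = 0.4
tilted = project_plane(P, 0.2 / np.cos(be), 1.0, 0.1 * np.cos(be), 0.5, 0.3 * np.cos(be), -0.2, 0.0, be)
assert np.allclose(base, tilted)
```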
Remark 25
Proposition 2 shows that a rotation of a planar calibration object around the \({{\varvec{x}}}\) axis can be exchanged with different values for the speed \(v_y\) and the translation \(t_y\). Proposition 3 shows that a rotation of a planar calibration object around the \({{\varvec{y}}}\) axis can be exchanged with different values for the magnification m, the speed \(v_x\), and the translation \(t_x\). Since the interior and exterior orientation parameters that are affected by these degeneracies are independent of each other, we conjecture that there is a universal degeneracy that implies that the interior and exterior orientation parameters cannot be determined from a single image of a planar calibration object, no matter how the calibration object is oriented in 3D. We prove that this is the case in the following theorem.
Theorem 2
For a single image of a planar calibration object, m, \(v_x\), \(v_y\), \(\alpha \), \(\beta \), \(t_x\), and \(t_y\) cannot be determined simultaneously if \(c_y = 0\), or if \(c_y \ne 0\) and there are no distortions (i.e., \(\kappa = 0\) for the division model).
Proof
Without loss of generality, we can assume that the planar calibration object lies in the plane \(z = 0\). Furthermore, we can immediately see that a rotation around the \({{\varvec{z}}}\) axis is immaterial since a rotation of a point in the plane \(z = 0\) by an angle \(\gamma \) around the \({{\varvec{z}}}\) axis merely corresponds to a different point in the plane \(z = 0\). Consequently, we can use \(\gamma = 0\) in the following.
We will now show that a camera with parameters m, \(s_x\), \(s_y\), \(c_x = 0\), \(c_y = 0\), \(v_x\), \(v_y\), \(t_x\), \(t_y\), \(\alpha \), and \(\beta \) leads to the same projection as a camera with parameters \(m / \cos \beta \), \(s_x\), \(s_y\), \(c_x = 0\), \(c_y = 0\), \(v_x \cos \beta \), \(v_y f\), \(t_x \cos \beta \), \(t_y f\), \(\phi \), and \(\psi \) for a suitably chosen factor f and suitably chosen angles \(\phi \) and \(\psi \). We will only examine the case of no lens distortions since for \(c_y = 0\) the distortion is purely along the horizontal direction of the image and therefore can be rectified within each line independently.
To prove that the two camera parameter sets above result in the same projection, we will construct the affine transformation matrix \({{{\mathbf {\mathtt{{A}}}}}}\) of points in the plane \(z = 0\) in the calibration object coordinate system to points in the image plane. The affine transformation is given by multiplying the camera matrix in (38) by \({{{\mathbf {\mathtt{{O}}}}}}^{\top }\) from the right, where \({{{\mathbf {\mathtt{{O}}}}}}\) is given by (37). Hence, we have:
For the first set of camera parameters, this results in:
where \(a = m / s_x\). For the second set of camera parameters, we obtain:
Hence, we can see that the translation part of \({{{\mathbf {\mathtt{{A}}}}}}_1\) and \({{{\mathbf {\mathtt{{A}}}}}}_2\) (their last columns) are identical. This leaves us with the following four equations for f, \(\phi \), and \(\psi \):
We solve (62) for f and obtain:
By substituting (63) into (60), we obtain an equation that is fulfilled tautologically. Hence, we substitute (63) into (61) and simplify to obtain:
which we solve for \(\tan \phi \):
By substituting (63) into (59) and simplifying, we obtain:
By substituting (65) into (66), we obtain:
Hence, we have:
where \(\cos ^2 \beta \) is an abbreviation for \((\cos \beta )^2\). Thus:
By substituting (68) into (65) and using the identity \(\sin \theta = \sqrt{1  \cos ^2 \theta }\), we obtain:
Therefore, we have:
Finally, by substituting (71) into (63) and using the identity \(\cos ( \arctan \theta ) = 1 / \sqrt{\theta ^2 +1}\), we obtain:
To extend the proof to the case \((c_x, c_y)^{\top } \ne (0,0)^{\top }\), we note (see Footnote 6) that \(c_{y,1}' = c_{y,1} s_y / (m v_y)\) for the first camera and \(c_{y,2}' = c_{y,2} s_y \cos \beta / (m v_y f)\) for the second camera, whence \(c_{y,2} = c_{y,1} (f / \cos \beta )\). Furthermore, for the first camera, \(c_{x,1}' = c_{y,1} (s_y / s_x) (v_x / v_y) + c_{x,1}\), while for the second camera, \(c_{x,2}' = c_{y,2} (s_y / s_x) (v_x \cos \beta ) / (v_y f) + c_{x,2}\). Substituting \(c_{y,2} = c_{y,1} (f / \cos \beta )\) into the last equation shows that \(c_{x,2} = c_{x,1}\). \(\square \)
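The construction in the proof can also be checked numerically. In the sketch below (our notation; signs valid for \(0< \beta < \pi /2\) and \(|\alpha | < \pi /2\)), \(\psi \), \(\phi \), and f are computed via the closed forms \(\cos \psi = \cos ^2 \beta \), \(\tan \phi = \tan \alpha \sin \beta / \sin \psi \), and \(f = \cos \phi / \cos \alpha \), which are equivalent to the relations derived above; the two \(2 \times 3\) plane-to-image matrices then agree:

```python
import numpy as np

def plane_to_image(a, v_x, v_y, t_x, t_y, alpha, beta):
    """2 x 3 affine map A from planar points (X, Y, 1) in z = 0 to the
    image, for a distortion-free telecentric line-scan camera, gamma = 0."""
    K = np.array([[a, -a * v_x / v_y], [0.0, 1.0 / v_y]])
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    # first two rows of R_x(alpha) @ R_y(beta), columns for X, Y, and t
    H = np.array([[cb, 0.0, t_x], [sa * sb, ca, t_y]])
    return K @ H

# first parameter set (arbitrary)
a, v_x, v_y, t_x, t_y, alpha, beta = 2.0, 0.1, 0.5, 0.3, -0.2, 0.25, 0.4

# second parameter set, constructed as in the proof of Theorem 2
psi = np.arccos(np.cos(beta) ** 2)
phi = np.arctan(np.tan(alpha) * np.sin(beta) / np.sin(psi))
f = np.cos(phi) / np.cos(alpha)

A1 = plane_to_image(a, v_x, v_y, t_x, t_y, alpha, beta)
A2 = plane_to_image(a / np.cos(beta), v_x * np.cos(beta), v_y * f,
                    t_x * np.cos(beta), t_y * f, phi, psi)
assert np.allclose(A1, A2)   # identical projections for all points in z = 0
```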
Remark 26
Table 1 displays an example of interior and exterior orientation parameters that result in identical projections for planar objects in the plane \(z = 0\), obtained with the solution in the proof of Theorem 2.
Remark 27
From the proof of Theorem 2, it might appear that there is only a twofold ambiguity. However, this is only caused by the fact that we have chosen the specific values of \(m / \cos \beta \), \(v_x \cos \beta \), and \(t_x \cos \beta \) for the second camera to simplify the proof as much as possible. If other factors instead of \(\cos \beta \) had been chosen, the values of f, \(\phi \), and \(\psi \) would change accordingly. Therefore, like in Propositions 2 and 3, the degeneracy is completely generic.
Corollary 1
The results above show that telecentric linescan cameras cannot be calibrated from a single image of a planar calibration object. Consequently, multiple images of a planar calibration object with different exterior orientation must be used to calibrate the camera if all parameters are to be determined unambiguously.
Remark 28
In machine vision applications, it sometimes is important to calibrate the camera from a single image. As the above discussion shows, this will lead to camera parameters that differ from their true values. However, if the residual error of the calibration is sufficiently small, a camera geometry that is consistent within the plane that is defined by the exterior orientation of the calibration object (and all planes parallel thereto) will be obtained. Therefore, an image or features extracted from an image can be rectified to this plane (cf. Remark 15). On the other hand, algorithms that solely rely on the interior orientation, e.g., the pure removal of radial distortions in Remark 14, are less useful because the ambiguities with respect to m and \(v_y\) imply that we cannot reliably undistort an image or features extracted from an image to have square pixels.
Calibration Experiments
Robustness of Principal Point Estimation
In our first experiment, we evaluate the importance of modeling \(c_y\) and lens distortions in general. We mounted an areascan camera with a telecentric lens approximately 30 cm above a linear stage. The camera was oriented such that its viewing direction was vertically downwards onto the linear stage and its \({{\varvec{y}}}\) axis was approximately parallel to the linear motion of the stage. An encoder that triggered the image acquisition was used to ensure a constant speed. We acquired an image at each trigger event and saved the obtained image array. We restricted the part that was read out from the sensor to the center image rows only: we selected the 90 sensor rows above and below the center image row, resulting in images of height 181. This setup enabled us to generate images of a virtual linescan camera that consist of one of the 181 image rows. The linescan image for one selected row was obtained by stacking the selected image row of all images in the array on top of each other. The frequency of the encoder was chosen such that the pixels in the generated linescan images were approximately square.
For the tests, we chose two different hardware setups. In the first setup, we used an IDS GV-5280CP-C-HQ color camera (2/3 inch sensor size, CMOS, 3.45 \(\upmu \hbox {m}\) pixel pitch, \(2448 \times 2048\)) with a telecentric Vicotar T201/0.19 lens (nominal magnification: 0.19). We set up the camera, which uses a color filter array to capture color information, to directly return grayscale images. The generated linescan images are of size \(2448 \times 3330\).
In the second setup, we used an IDS UI-3080CP-M-GL monochrome camera (2/3 inch sensor size, CMOS, 3.45 \(\upmu \hbox {m}\) pixel pitch, \(2456 \times 2054\)) with a telecentric V.S. Technologies LVSTC017 lens (nominal magnification: 0.17). Because the lens was designed for a maximum sensor size of 1/2 inch, we cropped the images to 70% of their width. The generated linescan images are of size \(1719 \times 2954\).
With both setups, we acquired 16 image arrays of a \(4 \times 3\) cm\(^2\) planar calibration object with a hexagonal layout of circular marks in different poses, as described in Steger et al. (2018, Chapter 3.9.4.1). For each of the 181 image rows, we generated 16 virtual linescan images, one from each of the 16 image arrays. Consequently, for each of the 181 image rows, we obtained 16 calibration images that we used for calibration by minimizing (35). Lens distortions were taken into account by applying the division model. The variation of the resulting camera parameters depending on the selected sensor row is shown in Figs. 2 and 3. In addition, the variation of the root mean square (RMS) calibration error, i.e., \(\sqrt{\varepsilon ^2 / \sum _{l=1}^{n_{\mathrm {o}}} \sum _{j=1}^{n_{\mathrm {m}}} v_{jl}}\), is plotted.
From the plots, it can be seen that all parameters except for \(c_y\) do not change substantially when selecting different sensor rows (note the scaling of the vertical axes). This is the expected behavior because \(c_y\) measures the misalignment of the sensor line with respect to the optical axis in the \({{\varvec{y}}}\) direction (see Sect. 3.2). Therefore, there is a linear relationship between the selected sensor row and \(c_y\), which is evident from the plots. While the expected change of \(c_y\) over the sensor rows is 181, it is 182 for the color camera but only 131 for the monochrome camera. Although we are not perfectly sure, we assume that the reason for \(c_y\) changing only by 131 pixels is that the circular actuator of the linear stage is not perfectly centered, resulting in a nonuniform movement (see Sect. 5.2 for more details). Because of the different magnifications of the lenses, for the color camera, there seems to be an integer number of periods of the movement bias in the images, whereas for the monochrome camera there seems to be a noninteger number of periods. Since \(c_y\) is primarily determined by the bending of the image lines, an asymmetry in the movement bias in the images may prevent the reliable extraction of \(c_y\) for the monochrome camera.
For increasing sensor rows, the values of m and \(\kappa \) increase slightly for the color camera and decrease slightly for the monochrome camera. This is because a larger magnification can be at least partially compensated by a larger value of \(\kappa \), which causes a non-uniform scaling within the image row, and vice versa.
Also note that the RMS error of the color camera is significantly larger than that of the monochrome camera, which is probably caused by image artifacts introduced by the color filter array.
Despite the relatively small lens distortions of at most 1.8 pixels in the images, \(c_x\) and \(c_y\) are estimated consistently over different sensor rows. Nevertheless, the standard deviations of the estimated principal point are significantly larger for the color camera than for the monochrome camera because the magnitude of \(\kappa \) is larger by a factor of almost 3 for the monochrome camera. Consequently, the principal point of the monochrome camera is better defined (see Remark 20).
We alternatively applied the polynomial distortion model to the calibration of the monochrome camera. As suggested in Remark 21, we set \(P_1\) and \(P_2\) to 0 and excluded them from the calibration. The results are shown in Fig. 4. The obtained parameter values are very similar to those obtained with the division model. The RMS error decreases only marginally, from 0.3318 (division) to 0.3316 (polynomial) on average. As for the division model, a correlation between m and the radial lens distortion parameters (\(K_i\)) is observable. This experiment shows that the division model represents the lens distortions sufficiently well and that both models return consistent results.
These experiments show that the proposed camera model is able to estimate the principal point of the camera accurately, and hence is able to handle lens distortions effectively, even for lenses with only small lens distortions.
Calibration of Line-Scan Cameras
In the second experiment, we calibrated two monochrome Basler raL2048-48gm line-scan cameras (14.3 mm sensor size, CMOS, 7.0 \(\upmu \hbox {m}\) pixel pitch, \(2048 \times 1\)) with Opto Engineering telecentric lenses. On the first camera, we mounted a TC2MHR048-F (nominal magnification: 0.268, working distance: 133 mm) and on the second camera, we mounted a TC2MHR058-F (nominal magnification: 0.228, working distance: 158 mm). We used the same setup as described in Sect. 5.1 and acquired 16 images of an \(8 \times 6\) cm\(^2\) planar calibration object in different poses. Each calibration was performed with the division and polynomial distortion models. The results are shown in Tables 2 and 3.
The low RMS errors indicate that the cameras and lenses can be represented very accurately by our proposed model. Both high-quality lenses have very small lens distortions. This also causes the principal points to be poorly defined, resulting in significantly different values of \(c_y\) for the division and polynomial distortion models in Table 2. When setting \(c_x=1024\), \(c_y=0\), and all distortion parameters to 0, and excluding these parameters from the calibration, the RMS errors increase only marginally, to 0.3545 (division model) and 0.3541 (polynomial model) for the TC2MHR048-F lens and to 0.2928 (division model) and 0.2927 (polynomial model) for the TC2MHR058-F lens. Nevertheless, the maximum absolute distortion in the images is approximately 1.2 pixels (TC2MHR048-F) and 1.4 pixels (TC2MHR058-F), which would decrease the accuracy of measurements if it were ignored.
The small distortions raise the question of whether any of the parameters in the model are redundant for these lenses, i.e., whether overfitting has occurred. Obviously, m, \(v_x\), and \(v_y\) are geometrically significant and therefore cannot be omitted. Hence, the question is whether \((c_x, c_y)\) or the distortion parameters \(\kappa \) or \(K_1\), \(K_2\), and \(K_3\) are significant. We applied the significance test proposed by Grün (1978) to all useful combinations of these parameters. Each test hypothesis was obtained by setting the respective distortion parameters to 0 and the principal point to the center of the sensor line. In Table 4, we display the results of this test for the TC2MHR058-F lens. The results for the TC2MHR048-F lens are omitted because they are similar. As can be seen from Table 4, all distortion-related parameters are highly significant. Therefore, no overfitting occurs, even for these small distortions.
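Grün's method is, in essence, a statistical test of a reduced parameter set against the full model. As an illustration of the idea only (not the exact statistic used in the paper), a standard F-test on the increase of the residual sum of squares when parameters are fixed can be sketched as:

```python
def model_reduction_f_statistic(rss_full, rss_reduced, n_obs, p_full, p_removed):
    """F statistic for testing whether fixing p_removed parameters
    (e.g., setting distortion coefficients to 0 and the principal
    point to the sensor center) significantly worsens the fit."""
    dof_full = n_obs - p_full                          # degrees of freedom of the full model
    increase_per_param = (rss_reduced - rss_full) / p_removed
    return increase_per_param / (rss_full / dof_full)

# Hypothetical numbers, for illustration only.
F = model_reduction_f_statistic(10.0, 20.0, n_obs=110, p_full=10, p_removed=2)
```

A value of F that is large compared with the critical value of the \(F(p_{\text{removed}}, n_{\text{obs}} - p_{\text{full}})\) distribution rejects the reduced model, i.e., the tested parameters are significant.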
Figure 5 shows one of the calibration images that were used to calibrate the camera with the TC2MHR048-F lens. In addition, the residuals (scaled by a factor of 130) are visualized for each circular calibration mark. The residuals are the differences between the extracted centers of the calibration marks in the image and the projections of the corresponding points on the calibration object into the image. The projection is performed using the calibrated interior and exterior orientation parameters while applying the polynomial distortion model. The mean and maximum length of the residuals were 0.284 pixels and 0.538 pixels, respectively. This corresponds to 7.21 \(\upmu \hbox {m}\) and 13.67 \(\upmu \hbox {m}\) in the world. It can be seen that the predominant part of the residuals is a systematic periodic error in the direction of the movement, i.e., in the vertical direction in the image. It should be noted that the encoder used responds to the angular position of the electric motor of the linear stage. Therefore, we assume that the major part of the residuals is caused by the circular actuator not being perfectly centered. A further indication of this is the fact that the periodicity of the error corresponds to one full revolution of the actuator. In this case, the calibration error could be reduced further by using a higher-quality electric actuator that better realizes a constant speed. In comparison, the residuals in the horizontal direction are very small, which again shows that the proposed camera model represents the true projection very well.
Example Application: Image Rectification
To be able to measure distances and angles precisely in the image, the obtained line-scan images must be rectified to eliminate lens distortions and skew and to ensure square pixels. While this is impossible in general for entocentric lenses (see Remark 12), such a rectification can be performed for telecentric lenses (see Remark 14).
For the example shown in Fig. 6, we acquired a line-scan image of graph paper with the setup described in Sect. 5.2 with the TC2MHR058-F lens. The acquired image is shown in Fig. 6a. Because the motion direction was not perfectly perpendicular to the sensor line, i.e., it had a significantly nonzero motion component in the direction of the sensor line (\(v_x=1.0491\) \(\upmu \hbox {m}\) Pixel\(^{-1}\), see Table 3), the squares of the graph paper appear skewed. Furthermore, because the speed of the motion was not perfectly adjusted, rectangular instead of square pixels are obtained, causing a non-uniform scaling of the squares of the graph paper in the image.
By setting all distortion coefficients to 0, \(v_x\) to 0, \(s_x\) to \(\min (s_x, m v_y)\), and \(v_y\) to \(s_x / m\) (see Remark 14), we can generate an image mapping that rectifies the images acquired with the setup. After the rectification, the images have no lens distortions and square pixels. Fig. 6b shows the resulting rectified image. The squares of the graph paper are squares in the rectified image. Hence, in the rectified image, it is possible to measure angles, distances, and areas correctly in world units.
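This parameter substitution can be sketched as follows (a sketch under our own naming conventions; the actual rectification additionally requires resampling the image with the resulting mapping):

```python
def rectified_parameters(m, s_x, s_y, v_x, v_y, n_dist):
    """Camera parameters of the rectified image: no lens distortions,
    motion perpendicular to the sensor line, and square pixels
    (m * v_y == s_x), following the substitutions of Remark 14."""
    s_x_rect = min(s_x, m * v_y)     # do not upsample beyond the finer axis
    return {
        'm': m,
        's_x': s_x_rect,
        's_y': s_y,
        'v_x': 0.0,                  # removes the skew
        'v_y': s_x_rect / m,         # yields square pixels
        'dist': [0.0] * n_dist,      # no lens distortions
    }

# Hypothetical values, loosely modeled on the TC2MHR058-F setup
# (m = 0.228, 7 um pixel pitch); v_y is assumed, not taken from Table 3.
params = rectified_parameters(0.228, 7.0, 7.0, 1.0491, 32.0, 1)
```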
For a quantitative comparison, we extracted subpixel-precise lines (Steger 1998a, b, 2013) in the original and in the rectified image, fitted straight lines, and computed their intersection angles. The mean over all angles was 87.971\(^\circ \) in the original image and 90.004\(^\circ \) in the rectified image. Furthermore, we computed the area of each square on the graph paper in both images and transformed it to metric units by multiplying it by \(s_x s_y / m^2\). Here, \(s_x\) and \(s_y\) are the pixel pitches on the sensor and m is the lens magnification obtained from calibration. The mean area was 25.50 mm\(^2\) in the original image and 24.98 mm\(^2\) in the rectified image, while the actual size of the squares on the paper was 25 mm\(^2\).
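The conversion from pixel areas to metric areas used here can be written as a small helper (a sketch; the function name is ours):

```python
def pixel_area_to_mm2(area_px, s_x, s_y, m):
    """Convert an area measured in square pixels to mm^2 in the object
    plane: each pixel images an (s_x / m) x (s_y / m) um^2 patch, where
    s_x and s_y are the pixel pitches in um and m is the magnification."""
    return area_px * (s_x * s_y) / (m * m) * 1e-6  # um^2 -> mm^2

# A 5 mm x 5 mm square imaged at m = 0.228 with 7 um pixels:
side_px = 5000.0 * 0.228 / 7.0                               # side length in pixels
area_mm2 = pixel_area_to_mm2(side_px ** 2, 7.0, 7.0, 0.228)  # recovers 25 mm^2
```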
Conclusions
We have proposed a camera model for line-scan cameras with telecentric lenses. The model assumes a linear motion of the camera with constant velocity. It can model general lens distortions by allowing the sensor line to lie anywhere with respect to the optical axis. The model is parameterized by camera parameters that have a physical meaning and is therefore easy for machine vision users to understand. We have described an algorithm to calibrate the camera model using a planar calibration object. Furthermore, we have investigated the degeneracies of the model and have discussed how they can be handled in practice. In addition, we have shown that every affine camera can be interpreted as a telecentric line-scan camera and vice versa, provided the lens does not exhibit any lens distortions. Experiments with real setups have been used to establish the validity of the model. In particular, we have shown that even for lenses with very small lens distortions, the distortions are statistically highly significant and therefore cannot be omitted in real-world applications.
One direction for future research is to derive an explicit stereo or multiview camera model for telecentric linescan cameras.
Notes
If the speed of the motion vector \({{\varvec{v}}}\) is specified in world units (i.e., m s\(^{-1}\)), it must be divided by the readout rate of the camera (units: Pixel s\(^{-1}\), i.e., scan lines per second) to obtain the motion vector in meters per scan line.
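This unit conversion amounts to a single division; as a sketch (names are ours):

```python
def motion_per_scan_line(speed, line_rate):
    """Convert a stage speed in m/s into the magnitude of the motion
    vector in meters per scan line, given the camera's readout rate
    in scan lines per second (Hz)."""
    return speed / line_rate

# E.g., a stage moving at 0.1 m/s with the camera scanning 10,000 lines
# per second advances 10 um between consecutive scan lines.
step = motion_per_scan_line(0.1, 10000.0)  # meters per scan line
```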
This corresponds to the point \({{\varvec{p}}}_l\) in (1).
The above definition of the principal point implies that points on the sensor line have a \({{\varvec{y}}}\) coordinate of 0.
One could also argue that the motion vector \({{\varvec{v}}}\) logically belongs to the exterior orientation of the camera. However, as (12) shows, the motion vector affects the projection of a point given in the camera coordinate system. Therefore, we include it in the interior orientation.
A principal point \((c_x, c_y)^{\top }\ne (0,0)^{\top }\) would lead to \({{{\mathbf {\mathtt{{K}}}}}}_{1,3} = c_y (s_y / s_x) (v_x /v_y) + c_x\) because of (25) and (13) and to \({{{\mathbf {\mathtt{{K}}}}}}_{2,3} = c_y s_y / (m v_y)\) because of (26). We will refer to these values as \(c_x'\) and \(c_y'\) where necessary.
The function \({\mathrm {diag}}\) constructs a diagonal matrix with the specified elements.
References
Beyerer, J., Puente León, F., & Frese, C. (2016). Machine vision: Automated visual inspection: Theory, practice and applications. Berlin: Springer.
Blahusch, G., Eckstein, W., Steger, C., & Lanser, S. (1999). Algorithms and evaluation of a high precision tool measurement system. In 5th International conference on quality control by artificial vision (pp. 31–36).
Brown, D. C. (1966). Decentering distortion of lenses. Photogrammetric Engineering, 32(3), 444–462.
Brown, D. C. (1971). Close-range camera calibration. Photogrammetric Engineering, 37(8), 855–866.
Chen, T., Shibasaki, R., & Shunji, M. (2003). Development and calibration of the airborne three-line scanner (TLS) imaging system. Photogrammetric Engineering and Remote Sensing, 69(1), 71–78.
de Oliveira, O. (2013). The implicit and the inverse function theorems: Easy proofs. Real Analysis Exchange, 39(1), 207–218.
Donné, S., Luong, H., Dhondt, S., Wuyts, N., Inzé, D., Goossens, B., & Philips, W. (2017). Robust plane-based calibration for linear cameras. In International conference on image processing (pp. 36–40).
Draréni, J., Roy, S., & Sturm, P. (2011). Plane-based calibration for linear cameras. International Journal of Computer Vision, 91(2), 146–156.
Fitzgibbon, A., Pilu, M., & Fisher, R. B. (1999). Direct least square fitting of ellipses. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5), 476–480.
Fitzgibbon, A. W. (2001). Simultaneous linear estimation of multiple view geometry and lens distortion. In IEEE conference on computer vision and pattern recognition (Vol. I, pp. 125–132).
Grün, A. (1978). Accuracy, reliability and statistics in close-range photogrammetry. In Inter-congress symposium, Commission V, International Society for Photogrammetry, Stockholm, unbound paper No. 9.
Gupta, R., & Hartley, R. I. (1997). Linear pushbroom cameras. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(9), 963–975.
Haala, N., Stallmann, D., & Cramer, M. (1998). Calibration of directly measured position and attitude by aerotriangulation of three-line airborne imagery. In T. Schenk & A. Habib (Eds.), ISPRS Commission III symposium on object recognition and scene classification from multispectral and multisensor pixels (pp. 23–30).
Hartley, R., & Zisserman, A. (2003). Multiple view geometry in computer vision (2nd ed.). Cambridge: Cambridge University Press.
Horaud, R., Mohr, R., & Lorecki, B. (1993). On single-scanline camera calibration. IEEE Transactions on Robotics and Automation, 9(1), 71–75.
Hui, B., Wen, G., Zhang, P., & Li, D. (2013). A novel line scan camera calibration technique with an auxiliary frame camera. IEEE Transactions on Instrumentation and Measurement, 62(9), 2567–2575.
Hui, B., Wen, G., Zhao, Z., & Li, D. (2012a). Line-scan camera calibration in close-range photogrammetry. Optical Engineering, 51(5), 053602-1–053602-12.
Hui, B., Zhong, J., Wen, G., & Li, D. (2012b). Determination of line scan camera parameters via the direct linear transformation. Optical Engineering, 51(11), 113201-1–113201-6.
Lanser, S. (1997). Modellbasierte Lokalisation gestützt auf monokulare Videobilder. Dissertation, Forschungs- und Lehreinheit Informatik IX, Technische Universität München.
Lanser, S., Zierl, C., & Beutlhauser, R. (1995). Multibildkalibrierung einer CCD-Kamera. In G. Sagerer, S. Posch, & F. Kummert (Eds.), Mustererkennung (pp. 481–491). Informatik aktuell. Berlin: Springer.
Lee, C., Theiss, H. J., Bethel, J. S., & Mikhail, E. M. (2000). Rigorous mathematical modeling of airborne pushbroom imaging systems. Photogrammetric Engineering and Remote Sensing, 66(4), 385–392.
Lenhardt, K. (2017). Optical systems in machine vision. In A. Hornberg (Ed.), Handbook of machine and computer vision (2nd ed., pp. 179–290). Weinheim: Wiley.
Lenz, R. (1987). Linsenfehlerkorrigierte Eichung von Halbleiterkameras mit Standardobjektiven für hochgenaue 3D-Messungen in Echtzeit. In E. Paulus (Ed.), Mustererkennung (Vol. 149, pp. 212–216). Berlin: Springer.
Lenz, R. (1988). Videometrie mit CCD-Sensoren und ihre Anwendung in der Robotik. Habilitationsschrift, Lehrstuhl für Nachrichtentechnik der Technischen Universität München.
Lenz, R., & Fritsch, D. (1990). Accuracy of videometry with CCD sensors. ISPRS Journal of Photogrammetry and Remote Sensing, 45(2), 90–110.
Lilienblum, E., Al-Hamadi, A., & Michaelis, B. (2013). A coded 3D calibration method for line-scan cameras. In J. Weickert, M. Hein, & B. Schiele (Eds.), Pattern recognition (Vol. 8142, pp. 81–90). Lecture notes in computer science. Berlin: Springer.
Luna, C. A., Mazo, M., Lázaro, J. L., & Vázquez, J. F. (2010). Calibration of line-scan cameras. IEEE Transactions on Instrumentation and Measurement, 59(8), 2185–2190.
Luster, S. D., & Batchelor, B. G. (2012). Telecentric, fresnel and micro lenses. In B. G. Batchelor (Ed.), Machine vision handbook (pp. 259–281). London: Springer.
Mallon, J., & Whelan, P. F. (2007). Which pattern? Biasing aspects of planar calibration patterns and detection methods. Pattern Recognition Letters, 28(9), 921–930.
MVTec Software GmbH. (2005a). HALCON application note: Machine vision in world coordinates. HALCON Version 7.1.
MVTec Software GmbH. (2005b). HALCON/HDevelop reference manual. HALCON Version 7.1.
Niu, M., Song, K., Wen, X., Zhang, D., & Yan, Y. (2018). The line scan camera calibration based on space rings group. IEEE Access, 6, 23711–23721.
Poli, D. (2007). A rigorous model for spaceborne linear array sensors. Photogrammetric Engineering and Remote Sensing, 73(2), 187–196.
Ramalingam, S., & Sturm, P. (2017). A unifying model for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(7), 1309–1319.
Ramalingam, S., Sturm, P., & Lodha, S. K. (2006). Theory and calibration for axial cameras. In P. J. Narayanan, S. K. Nayar, & H. Y. Shum (Eds.), 7th Asian conference on computer vision (Vol. 3851, pp. 704–713). Lecture notes in computer science. Berlin: Springer.
Shapiro, L. S., Zisserman, A., & Brady, M. (1995). 3D motion recovery via affine epipolar geometry. International Journal of Computer Vision, 16(2), 147–182.
Song, K., Hou, B., Niu, M., Wen, X., & Yan, Y. (2018). Flexible line-scan camera calibration method using a coded eight trigrams pattern. Optics and Lasers in Engineering, 110, 296–307.
Steger, C. (1998a). An unbiased detector of curvilinear structures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(2), 113–125.
Steger, C. (1998b). Unbiased extraction of curvilinear structures from 2D and 3D images. Dissertation, Fakultät für Informatik, Technische Universität München.
Steger, C. (2000). Subpixel-precise extraction of lines and edges. International Archives of Photogrammetry and Remote Sensing, 33(B3), 141–156.
Steger, C. (2012). Estimating the fundamental matrix under pure translation and radial distortion. ISPRS Journal of Photogrammetry and Remote Sensing, 74, 202–217.
Steger, C. (2013). Unbiased extraction of lines with parabolic and Gaussian profiles. Computer Vision and Image Understanding, 117(2), 97–112.
Steger, C. (2017). A comprehensive and versatile camera model for cameras with tilt lenses. International Journal of Computer Vision, 123(2), 121–159.
Steger, C. (2018). Algorithms for the orthographic-\(n\)-point problem. Journal of Mathematical Imaging and Vision, 60(2), 246–266.
Steger, C., Ulrich, M., & Wiedemann, C. (2008). Machine vision algorithms and applications. Weinheim: Wiley.
Steger, C., Ulrich, M., & Wiedemann, C. (2018). Machine vision algorithms and applications (2nd ed.). Weinheim: Wiley.
Sturm, P., Ramalingam, S., Tardif, J. P., Gasparini, S., & Barreto, J. (2010). Camera models and fundamental concepts used in geometric computer vision. Foundations and Trends in Computer Graphics and Vision, 6(1–2), 1–183.
Sun, B., Zhu, J., Yang, L., Guo, Y., & Lin, J. (2017). Stereo line-scan sensor calibration for 3D shape measurement. Applied Optics, 56(28), 7905–7914.
Sun, B., Zhu, J., Yang, L., Yang, S., & Guo, Y. (2016a). Sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras. Sensors, 16(11), 1949-1–1949-15.
Sun, B., Zhu, J., Yang, L., Yang, S., & Niu, Z. (2016b). Calibration of line-scan cameras for precision measurement. Applied Optics, 55(25), 6836–6843.
Ulrich, M., & Steger, C. (2019). A camera model for cameras with hypercentric lenses and some example applications. Machine Vision and Applications, 30(6), 1013–1028.
Yao, M., Zhao, Z., & Xu, B. (2014). Geometric calibration of line-scan camera using a planar pattern. Journal of Electronic Imaging, 23(1), 013028-1–013028-8.
Ye, J., & Yu, J. (2014). Ray geometry in non-pinhole cameras: A survey. The Visual Computer, 30(1), 93–112.
Zhang, J., Zhang, Z., Shen, F., Zhang, F., & Su, H. (2018). A new calibration method of line scan camera for high-precision two-dimensional measurement. In 14th International conference on automation science and engineering (pp. 678–683).
Acknowledgements
We are grateful to Lorenz Zistler for helping us to set up the hardware for the experiments in Sect. 5.
Contributions
Carsten Steger designed, analyzed, and implemented the camera model and wrote Sects. 2–4 and 6. Markus Ulrich designed the experiments, acquired the images, performed the experiments, and wrote Sect. 5. Both authors wrote Sect. 1 and critically reviewed the paper.
Communicated by Jun Sato.
Cite this article
Steger, C., Ulrich, M. A Camera Model for Line-Scan Cameras with Telecentric Lenses. Int J Comput Vis 129, 80–99 (2021). https://doi.org/10.1007/s11263-020-01358-3
Keywords
 Line-scan cameras
 Telecentric lenses
 Camera models
 Camera model degeneracies
 Camera calibration