Abstract
Line triangulation, a foundational problem in computer vision, estimates the 3D position of a line from a set of measured image lines with known camera projection matrices. To improve the triangulation's efficiency, this work proposes two algorithms that find suboptimal solutions under the algebraic-error optimality criterion on the Plücker line coordinates. In these algorithms, the algebraic-error optimality criterion is reformulated by a transformation of the Klein constraint. By relaxing the quadratic unit-norm constraint to six linear constraints, six new single-quadric-constraint optimality criteria are constructed in the new formulation, whose optimal solutions can be obtained by solving polynomial equations. Moreover, we prove that the minimum algebraic error of either the first three or the last three of the six new criteria is at most \(\sqrt{3}\) times that of the original algebraic-error optimality criterion. Thus, with three of the new criteria and with all six criteria, suboptimal solutions under the algebraic-error minimization and the geometric-error minimization are obtained. Experimental results show the effectiveness of the proposed algorithms.
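For readers unfamiliar with the representation, the Plücker line coordinates and the Klein (quadric) constraint that the abstract refers to can be sketched numerically. The sketch below uses the common (direction; moment) convention, which is an assumption here, not necessarily the exact convention of the paper:

```python
import numpy as np

# Two 3D points spanning a line (arbitrary example values).
rng = np.random.default_rng(1)
X1, X2 = rng.standard_normal(3), rng.standard_normal(3)

d = X2 - X1                  # line direction
m = np.cross(X1, X2)         # line moment
L = np.concatenate([d, m])   # 6-vector of Plücker coordinates (d; m)

# Klein constraint: a 6-vector represents a real 3D line iff d . m = 0.
assert abs(d @ m) < 1e-12
```

The constraint holds identically here because d · m = (X2 − X1) · (X1 × X2) vanishes by construction; for a noisy 6-vector estimated from images, it must be enforced explicitly, which is what the paper's criteria do.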
Acknowledgements
We wish to thank the anonymous reviewers for their inspiring comments and suggestions. Also, we gratefully acknowledge the support from the Open Project Program of the National Laboratory of Pattern Recognition (NLPR) (201204243), the Fundamental Research Funds for the Central Universities (K5051302009), the Natural Science Foundation of China (Nos. 61272281, 61271297, 61375042), and the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20110203110001).
Appendices
Appendix A
For
where λ is unknown, its determinant can be expanded as:
where [⋅]_i denotes the sum of all i×i principal minors, I_n is the n×n identity matrix, and n is the order of M. From (17), only the last three columns of the adjoint matrix of M are needed, so we only discuss the cofactors of the last three rows of M. There are three situations:
(1) Elements on the main diagonal: their cofactors can be obtained according to (19).
(2) Nonzero elements at the ith row and jth column (i≠j): their cofactors are the determinants of matrices of the following form, obtained by row or column exchanges of the submatrices formed by deleting the ith row and the jth column:
(3) Zero elements at the ith row and jth column (i≠j): their cofactors are the negatives of the determinants of matrices of the following form, obtained by row or column exchanges of the submatrices formed by deleting the ith row and the jth column:
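The principal-minor expansion above can be checked numerically. Under the standard identity it reads det(M + λI_n) = Σ_{i=0}^{n} [M]_i λ^{n−i} with [M]_0 = 1; the helper name below is ours, not from the paper:

```python
import itertools
import numpy as np

def principal_minor_sum(M, i):
    """[M]_i: the sum of all i x i principal minors of M ([M]_0 = 1)."""
    n = M.shape[0]
    if i == 0:
        return 1.0
    return sum(np.linalg.det(M[np.ix_(idx, idx)])
               for idx in itertools.combinations(range(n), i))

# Check det(M + lam*I_n) = sum_i [M]_i * lam^(n-i) on a random 4x4 matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
lam = 0.7
lhs = np.linalg.det(M + lam * np.eye(4))
rhs = sum(principal_minor_sum(M, i) * lam ** (4 - i) for i in range(5))
assert abs(lhs - rhs) < 1e-9
```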
Appendix B
If the first m_i elements are zero, we discuss how to compute the candidates of \(( \hat{\mathbf{L}}_{1,2}^{T},\hat{\mathbf{L}}_{1,3}^{T} )^{T}\) in the following two situations.
B.1 Situation m_i = 1
where μ_i is an unknown scalar, is a constant vector. Substituting (20) into the second equation in (15), we obtain a quadratic equation. The candidates of \(( \hat{\mathbf{L}}_{1,2}^{T},\hat{\mathbf{L}}_{1,3}^{T} )^{T}\) can then be obtained from the real roots of this quadratic equation.
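The last step above reduces to collecting the real roots of a scalar quadratic a·μ² + b·μ + c = 0. A small helper sketch; the coefficient names and the degenerate-case handling are our assumptions:

```python
import numpy as np

def real_roots_quadratic(a, b, c, tol=1e-12):
    """Real roots of a*mu^2 + b*mu + c = 0, smaller root first."""
    if abs(a) < tol:                      # degenerate: linear equation
        return [] if abs(b) < tol else [-c / b]
    disc = b * b - 4.0 * a * c
    if disc < -tol:
        return []                         # complex roots: no candidates
    r = np.sqrt(max(disc, 0.0))
    return sorted([(-b - r) / (2.0 * a), (-b + r) / (2.0 * a)])
```

Each returned root gives one candidate of \(( \hat{\mathbf{L}}_{1,2}^{T},\hat{\mathbf{L}}_{1,3}^{T} )^{T}\) via (20); an empty list means this situation contributes no candidate.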
B.2 Situation m_i > 1
where μ_i is an m_i-dimensional unknown vector, is a constant vector when m_i ≤ 3 or a constant when m_i = 4. Substituting (21) into the second equation in (15) gives a quadratic equation in m_i variables, which has infinitely many solutions. To reduce the number of candidates of \(( \hat{\mathbf{L}}_{1,2}^{T},\hat{\mathbf{L}}_{1,3}^{T} )^{T}\) from this quadratic equation, we select the candidate that gives the smallest value of the cost function in (13), i.e., the optimal solution of the minimization problem in the m_i-vector μ_i obtained by substituting (21) into the criterion (13)
or equivalently
where
\(\mathbf{d}_{1} = ( \mathbf{v}_{i}^{T},1 )\mathrm{D}_{12}^{T}\) and \(\mathbf{s}_{1} = ( \mathbf{v}_{i}^{T},1 )\mathrm{S}_{12}^{T}\); D11 and S11 are the upper-left m_i×m_i submatrices of D and S, D12 and S12 are the upper-right m_i×(6−m_i) submatrices of D and S, and D22 and S22 are the lower-right (6−m_i)×(6−m_i) submatrices of D and S, respectively.
By Lagrange’s multiplier method, we obtain the following Lagrange’s equations according to (22):
where γ is the Lagrange multiplier. Three cases need to be considered, and the optimal solution is selected from the candidates of μ_i obtained in these three cases.
Case A: det(D11 + γS11) ≠ 0.
According to the first equation in (23), we obtain
$$ \boldsymbol{\mu}_{i} = - \frac{( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )^{*} ( \mathbf{d}_{1} + \gamma\mathbf{s}_{1} )}{\det ( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )} \tag{24} $$

where (D11 + γS11)∗ is the adjoint matrix of D11 + γS11. Substituting (24) into the second equation of (23), an equation of degree 2m_i (2 ≤ m_i ≤ 4) is obtained:

$$ \begin{aligned} &( \mathbf{d}_{1} + \gamma\mathbf{s}_{1} )^{T} ( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )^{*T}\mathrm{S}_{11} ( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )^{*} ( \mathbf{d}_{1} + \gamma\mathbf{s}_{1} ) \\ &\quad {}- 2\det ( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )\,\mathbf{s}_{1} ( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )^{*} ( \mathbf{d}_{1} + \gamma\mathbf{s}_{1} ) \\ &\quad {}+ s_{2}\det ( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )^{2} = 0 \end{aligned} \tag{25} $$

Because the order of the square matrix D11 + γS11 is at most 4, its determinant and adjoint matrix are easy to compute. By choosing the real roots of (25) that do not make det(D11 + γS11) = 0 and substituting them into (24), we obtain the candidates of μ_i.
Case B: a root γ′ of det(D11 + γS11) = 0 makes the first equation in (23) solvable, and exactly one eigenvalue of D11 + γ′S11 is zero.
Referring to Situation m_i = 1, the eigen-decomposition of D11 + γ′S11 yields an equation like (20). Substituting it into the second equation of (23), if real roots exist, the candidates of μ_i can be obtained from them as in Situation m_i = 1.
Case C: a root γ″ of det(D11 + γS11) = 0 makes the first equation in (23) solvable, and more than one eigenvalue of D11 + γ″S11 is zero.
The problem reduces to Situation m_i > 1. The eigen-decomposition of D11 + γ″S11 yields an equation like (21); a minimization problem like (22) is then constructed, which can be solved as in Cases A, B and C. Because Case C is so similar to Situation m_i > 1, the details of obtaining the candidates of μ_i are omitted here.
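Case A can be prototyped numerically: sample the left-hand side of (25), interpolate the degree-2m_i polynomial in γ, take its real roots, and map each back to μ_i via (24). The sketch below assumes D11, S11, d1, s1, s2 are given as NumPy arrays/scalars with the block shapes defined earlier; the function names are ours:

```python
import numpy as np

def adjugate(A):
    """Adjugate (adjoint) matrix via cofactors; fine for m_i <= 4."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

def case_a_candidates(D11, S11, d1, s1, s2, tol=1e-8):
    """Candidates mu_i of Case A: real roots gamma of (25) with
    det(D11 + gamma*S11) != 0, mapped through (24)."""
    m = D11.shape[0]
    deg = 2 * m

    def f(g):                             # left-hand side of (25)
        A = D11 + g * S11
        adjA, detA = adjugate(A), np.linalg.det(A)
        v = d1 + g * s1
        return (v @ adjA.T @ S11 @ adjA @ v
                - 2.0 * detA * (s1 @ adjA @ v)
                + s2 * detA ** 2)

    # f is a polynomial of degree 2*m_i; deg+1 samples determine it exactly.
    xs = np.linspace(-1.0, 1.0, deg + 1)
    coeffs = np.polyfit(xs, [f(x) for x in xs], deg)

    candidates = []
    for g in np.roots(coeffs):
        if abs(g.imag) > tol:
            continue                      # keep real roots only
        g = g.real
        A = D11 + g * S11
        detA = np.linalg.det(A)
        if abs(detA) < tol:
            continue                      # Case A excludes singular A
        candidates.append(-adjugate(A) @ (d1 + g * s1) / detA)
    return candidates
```

As a toy example (our own, with m_i = 1), D11 = [[2]], S11 = [[1]], d1 = [1], s1 = [0], s2 = −1 makes (25) read 1 − (2 + γ)² = 0, giving γ ∈ {−1, −3} and candidates μ_i = ∓1. A closed-form expansion of the polynomial coefficients, as the paper suggests, would avoid the interpolation step.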
Cite this article
Zhang, Q., Wu, Y., Wang, F. et al. Suboptimal Solutions to the Algebraic-Error Line Triangulation. J Math Imaging Vis 49, 611–632 (2014). https://doi.org/10.1007/s10851-013-0491-y