
Suboptimal Solutions to the Algebraic-Error Line Triangulation

Journal of Mathematical Imaging and Vision

Abstract

Line triangulation, a foundational problem in computer vision, estimates the position of a 3D line from a set of measured image lines with known camera projection matrices. To improve the triangulation's efficiency, this work proposes two algorithms that find suboptimal solutions under the algebraic-error optimality criterion on the Plücker line coordinates. In the proposed algorithms, the algebraic-error optimality criterion is reformulated via a transformation of the Klein constraint. By relaxing the quadratic unit-norm constraint to six linear constraints, six new single-quadric-constraint optimality criteria are constructed in the new formulation, whose optimal solutions can be obtained by solving polynomial equations. Moreover, we prove that the minimum algebraic error of either the first three or the last three of the six new criteria is at most \(\sqrt{3}\) times that of the original algebraic-error optimality criterion. Thus, suboptimal solutions under algebraic-error minimization are obtained with three of the new criteria, and suboptimal solutions under geometric-error minimization with all six. Experimental results show the effectiveness of the proposed algorithms.



Acknowledgements

We wish to thank the anonymous reviewers for their inspiring comments and suggestions. Also, we gratefully acknowledge the support from the Open Project Program of the National Laboratory of Pattern Recognition (NLPR) (201204243), the Fundamental Research Funds for the Central Universities (K5051302009), the Natural Science Foundation of China (Nos. 61272281, 61271297, 61375042), and the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20110203110001).

Author information

Corresponding author

Correspondence to Qiang Zhang.

Appendices

Appendix A

For

$$\mathrm{M} ( \lambda ) = \left( \begin{array}{c@{\quad}c} \varSigma '_{1} + \lambda\mathrm{I}_{2} & \mathrm{B}_{2} \\ \mathrm{B}_{2}^{T} & \varSigma_{2} - \lambda\mathrm{I}_{3} \end{array} \right), $$

where λ is unknown, its determinant can be expanded as:

$$\begin{aligned} \det(\mathrm{M}) =& - \det\bigl(\left( \begin{array}{c@{\quad}c} \varSigma'_{1} & - \mathrm{B}_{2} \\ \mathrm{B}_{2}^{T} & - \varSigma_{2} \end{array} \right) + \lambda \mathrm{I}_{n}\bigr) \\ =& - \lambda^{n} - \sum_{i = 1}^{n} \left[ \left( \begin{array}{c@{\quad}c} \varSigma'_{1} & - \mathrm{B}_{2} \\ \mathrm{B}_{2}^{T} & - \varSigma_{2} \end{array} \right) \right]_{i} \lambda^{n - i} \end{aligned}$$
(19)

where \([\,\cdot\,]_{i}\) denotes the sum of all the \(i\times i\) principal minors, \(\mathrm{I}_{n}\) is the \(n\times n\) identity matrix, and \(n\) is the order of \(\mathrm{M}\). From (17), only the last three columns of the adjoint matrix of \(\mathrm{M}\) are needed, so we discuss only the cofactors of the last three rows of \(\mathrm{M}\). There are three situations:

(1) Elements on the main diagonal: their cofactors can be obtained according to (19).

(2) Nonzero elements in the \(i\)th row and \(j\)th column (\(i \ne j\)): their cofactors are the determinants of matrices of the following form, obtained by row or column exchanges of the submatrices formed by deleting the \(i\)th row and the \(j\)th column:

$$\begin{aligned} &{\left\vert \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} 0 & h_{12} & h_{13} & h_{14} \\ p + \lambda& h_{22} & h_{23} & h_{24} \\ h_{31} & 0 & q_{1} - \lambda& 0 \\ h_{41} & 0 & 0 & q_{2} - \lambda \end{array} \right\vert} \\ &{\quad = - h_{12}\lambda^{3} - h_{12}(p - q_{1} - q_{2})\lambda^{2}- (h_{12}h_{23}h_{31}} \\ &{\qquad {} + h_{12}h_{24}h_{41} + h_{12}q_{1}q_{2} - h_{13}h_{22}h_{31} - h_{14}h_{22}h_{41}} \\ &{\qquad {}- h_{12}pq_{1} - h_{12}pq_{2})\lambda+ h_{12}h_{24}h_{41}q_{1} + h_{12}h_{23}h_{31}q_{2}} \\ &{\qquad {}- h_{14}h_{22}h_{41}q_{1} - h_{13}h_{22}h_{31}q_{2} - h_{12}pq_{1}q_{2}} \end{aligned}$$

(3) Zero elements in the \(i\)th row and \(j\)th column (\(i \ne j\)): their cofactors are the negatives of the determinants of matrices of the following form, obtained by row or column exchanges of the submatrices formed by deleting the \(i\)th row and the \(j\)th column:

$$\begin{aligned} &{\left\vert \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} p_{1} + \lambda& 0 & h_{13} & h_{14} \\ 0 & p_{2} + \lambda& h_{23} & h_{24} \\ h_{31} & h_{32} & 0 & 0 \\ h_{41} & h_{42} & 0 & q - \lambda \end{array} \right\vert} \\ &{\quad = ( h_{13}h_{31} + h_{23}h_{32} )\lambda^{2}} \\ &{\qquad {}+ ( h_{23} h_{32}p_{1} + h_{13}h_{31}p_{2} - h_{13}h_{31}q - h_{23} h_{32}q )\lambda} \\ &{\qquad {}+ h_{14}h_{23}h_{32}h_{41} + h_{13}h_{24}h_{31}h_{42} - h_{13}h_{24}h_{32}h_{41}} \\ &{\qquad {}- h_{14}h_{23}h_{31}h_{42} - h_{23} h_{32}p_{1}q - h_{13}h_{31}p_{2}q} \end{aligned}$$
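As a sanity check, the identity underlying (19) and the situation-(2) expansion above can be verified numerically. The following Python sketch uses arbitrary numerical values (not taken from the paper) to confirm both.

```python
# Numerical sanity checks with arbitrary values (not from the paper):
# (a) det(lambda*I_n + A) = lambda^n + sum_i [A]_i lambda^(n-i), where
#     [A]_i is the sum of all i x i principal minors of A, as in (19);
# (b) the closed-form 4 x 4 cofactor determinant of situation (2).
import itertools
import numpy as np

def principal_minor_sums(A):
    """Return [A]_1, ..., [A]_n for a square matrix A."""
    n = A.shape[0]
    return [sum(np.linalg.det(A[np.ix_(idx, idx)])
                for idx in itertools.combinations(range(n), i))
            for i in range(1, n + 1)]

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
lam = 0.7
poly_val = lam ** 5 + sum(c * lam ** (5 - i)
                          for i, c in enumerate(principal_minor_sums(A), start=1))
direct = np.linalg.det(lam * np.eye(5) + A)
assert abs(poly_val - direct) < 1e-8

# (b) the situation-(2) matrix, checked against np.linalg.det.
h12, h13, h14, h22, h23, h24, h31, h41 = 0.3, -1.2, 0.7, 2.0, 0.5, -0.4, 1.1, -0.8
p, q1, q2 = 1.5, -0.6, 0.9
M = np.array([[0,       h12, h13,      h14],
              [p + lam, h22, h23,      h24],
              [h31,     0,   q1 - lam, 0],
              [h41,     0,   0,        q2 - lam]])
expansion = (-h12 * lam**3 - h12 * (p - q1 - q2) * lam**2
             - (h12*h23*h31 + h12*h24*h41 + h12*q1*q2
                - h13*h22*h31 - h14*h22*h41 - h12*p*q1 - h12*p*q2) * lam
             + h12*h24*h41*q1 + h12*h23*h31*q2
             - h14*h22*h41*q1 - h13*h22*h31*q2 - h12*p*q1*q2)
assert abs(np.linalg.det(M) - expansion) < 1e-10
```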

Appendix B

If the first \(m_{i}\) elements in are zero, we discuss how to compute the candidates of \(( \hat{\mathbf{L}}_{1,2}^{T},\hat{\mathbf{L}}_{1,3}^{T} )^{T}\) in the following two situations.

B.1 Situation \(m_{i} = 1\)

$$ \left( \begin{array}{c} \hat{\mathbf{L}}_{1,2} \\ \hat{\mathbf{L}}_{1,3} \end{array} \right) = \mathrm{U}_{i}\left( \begin{array}{c} \mu_{i} \\ \mathbf{v}_{i} \end{array} \right) $$
(20)

where \(\mu_{i}\) is an unknown scalar and \(\mathbf{v}_{i}\) is a constant vector. Substituting (20) into the second equation in (15) yields a quadratic equation; the candidates of \(( \hat{\mathbf{L}}_{1,2}^{T},\hat{\mathbf{L}}_{1,3}^{T} )^{T}\) are obtained from its real roots.
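This step can be sketched numerically. The snippet below is illustrative only: the orthogonal matrix, constant vector, and the quadric standing in for the second equation of (15) are all placeholders, since the paper's actual quantities are not reproduced here.

```python
# Illustrative sketch: with x(mu) = U_i @ [mu; v_i] affine in the scalar
# unknown mu, a quadric constraint x^T Q x = 0 reduces to a quadratic in mu.
# U_i, v_i, and Q below are placeholders, not the paper's data.
import numpy as np

rng = np.random.default_rng(1)
n = 5
U_i = np.linalg.qr(rng.standard_normal((n, n)))[0]  # placeholder orthogonal U_i
v_i = rng.standard_normal(n - 1)                    # placeholder constant vector
Q = np.diag([1.0, 1.0, -1.0, -1.0, -1.0])           # placeholder quadric

u1 = U_i[:, 0]            # column multiplied by the scalar unknown mu
w = U_i[:, 1:] @ v_i      # fixed part, so x(mu) = mu*u1 + w
a = u1 @ Q @ u1           # quadratic coefficient
b = 2 * u1 @ Q @ w        # linear coefficient
c = w @ Q @ w             # constant term
roots = np.roots([a, b, c])
# Keep real roots only; a negative discriminant leaves no candidate.
real_mus = [r.real for r in roots if abs(r.imag) < 1e-10]
for mu in real_mus:
    x = mu * u1 + w
    assert abs(x @ Q @ x) < 1e-8   # each real root satisfies the constraint
```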

B.2 Situation \(m_{i} > 1\)

$$ \left( \begin{array}{c} \hat{\mathbf{L}}_{1,2} \\ \hat{\mathbf{L}}_{1,3} \end{array} \right) = \mathrm{U}_{i}\left( \begin{array}{c} \boldsymbol{\mu}_{i} \\ \mathbf{v}_{i} \end{array} \right) $$
(21)

where \(\boldsymbol{\mu}_{i}\) is an \(m_{i}\)-dimensional unknown vector and \(\mathbf{v}_{i}\) is a constant vector when \(m_{i} \le 3\) or a constant when \(m_{i} = 4\). Substituting (21) into the second equation in (15) gives a quadratic equation in \(m_{i}\) variables, which has infinitely many solutions. To reduce the number of candidates of \(( \hat{\mathbf{L}}_{1,2}^{T},\hat{\mathbf{L}}_{1,3}^{T} )^{T}\), we select the candidate that gives the smallest value of the cost function in (13). This is equivalent to solving the following minimization problem in the \(m_{i}\)-vector \(\boldsymbol{\mu}_{i}\), constructed by substituting (21) into the criterion (13):

$$\begin{array}{l@{\quad}l} \displaystyle \min_{\boldsymbol{\mu}_{i}} & ( \boldsymbol{\mu}_{i}^{T}, \mathbf{v}_{i}^{T},1 )\mathrm{D}\left( \begin{array}{c} \boldsymbol{\mu}_{i} \\ \mathbf{v}_{i} \\ 1 \end{array} \right) \\ \displaystyle \mbox{s.t. } & \bigl( \boldsymbol{\mu}_{i}^{T}, \mathbf{v}_{i}^{T},1 \bigr)\mathrm{S}\left( \begin{array}{c} \boldsymbol{\mu}_{i} \\ \mathbf{v}_{i} \\ 1 \end{array} \right) = 0 \end{array} $$

or equivalently

$$ \begin{array}{l@{\quad}l} \min & \boldsymbol{\mu}_{i}^{T}\mathrm{D}_{11}\boldsymbol{\mu}_{i} + 2\mathbf{d}_{1}\boldsymbol{\mu}_{i} + d_{2} \\ \mbox{s.t. }& \boldsymbol{\mu}_{i}^{T}\mathrm{S}_{11}\boldsymbol{\mu}_{i} + 2\mathbf{s}_{1}\boldsymbol{\mu}_{i} + s_{2} = 0 \end{array} $$
(22)

where

$$\begin{aligned} &{\mathrm{D} = \left( \begin{array}{c@{\quad}c} \mathrm{U}_{i}^{T} & 0 \\ 0 & 1 \end{array} \right)\left( \begin{array}{c@{\quad}c@{\quad}c} \varSigma'_{1} & \mathrm{B}_{2} & 0 \\ \mathrm{B}_{2}^{T} & \varSigma_{2} & \mathbf{B}_{1}^{T} \\ 0 & \mathbf{B}_{1} & \sigma_{1} \end{array} \right)\left( \begin{array}{c@{\quad}c} \mathrm{U}_{i} & 0 \\ 0 & 1 \end{array} \right),} \\ &{\mathrm{S} = \left( \begin{array}{c@{\quad}c} \mathrm{U}_{i}^{T} & 0 \\ 0 & 1 \end{array} \right)\left( \begin{array}{c@{\quad}c@{\quad}c} \mathrm{I}_{2} & 0 & 0 \\ 0 & - \mathrm{I}_{3} & 0 \\ 0 & 0 & 1 \end{array} \right)\left( \begin{array}{c@{\quad}c} \mathrm{U}_{i} & 0 \\ 0 & 1 \end{array} \right),} \end{aligned}$$

\(\mathbf{d}_{1} = ( \mathbf{v}_{i}^{T},1 )\mathrm{D}_{12}^{T}\), \(d_{2} = ( \mathbf{v}_{i}^{T},1 )\mathrm{D}_{22}( \mathbf{v}_{i}^{T},1 )^{T}\), \(\mathbf{s}_{1} = ( \mathbf{v}_{i}^{T},1 )\mathrm{S}_{12}^{T}\), \(s_{2} = ( \mathbf{v}_{i}^{T},1 )\mathrm{S}_{22}( \mathbf{v}_{i}^{T},1 )^{T}\); \(\mathrm{D}_{11}\) and \(\mathrm{S}_{11}\) are the upper-left \(m_{i}\times m_{i}\) submatrices of \(\mathrm{D}\) and \(\mathrm{S}\), \(\mathrm{D}_{12}\) and \(\mathrm{S}_{12}\) are the upper-right \(m_{i}\times(6-m_{i})\) submatrices of \(\mathrm{D}\) and \(\mathrm{S}\), and \(\mathrm{D}_{22}\) and \(\mathrm{S}_{22}\) are the lower-right \((6-m_{i})\times(6-m_{i})\) submatrices of \(\mathrm{D}\) and \(\mathrm{S}\), respectively.
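The block reduction from the full quadratic form to (22) can be checked numerically. The sketch below uses a synthetic symmetric \(\mathrm{D}\) (not the paper's data), and takes \(d_{2}\) as the \(( \mathbf{v}_{i}^{T},1 )\)-reduction of \(\mathrm{D}_{22}\), an assumption consistent with the structure of (22).

```python
# Synthetic check that eliminating the fixed block (v_i; 1) from the full
# quadratic form reproduces the reduced objective of (22):
# (mu^T, v^T, 1) D (mu; v; 1) = mu^T D11 mu + 2 d1 mu + d2.
# D, v_i, and mu below are made-up data, and d2's definition is assumed.
import numpy as np

rng = np.random.default_rng(4)
m, k = 2, 4                                  # m = dim(mu), k = dim(v) + 1
D = rng.standard_normal((m + k, m + k))
D = (D + D.T) / 2                            # symmetric full form
v1 = np.append(rng.standard_normal(k - 1), 1.0)  # the fixed block (v_i; 1)

D11 = D[:m, :m]                              # upper-left m x m block
D12 = D[:m, m:]                              # upper-right block
D22 = D[m:, m:]                              # lower-right block
d1 = v1 @ D12.T                              # d_1 = (v^T, 1) D12^T
d2 = v1 @ D22 @ v1                           # assumed d_2 = (v^T, 1) D22 (v; 1)

mu = rng.standard_normal(m)
full = np.concatenate([mu, v1]) @ D @ np.concatenate([mu, v1])
reduced = mu @ D11 @ mu + 2 * d1 @ mu + d2
assert abs(full - reduced) < 1e-10           # the two forms coincide
```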

By Lagrange’s multiplier method, we obtain the following Lagrange’s equations according to (22):

$$ \left\{ \begin{array}{l} ( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )\boldsymbol{\mu}_{i} + \mathbf{d}_{1} + \gamma\mathbf{s}_{1} = 0 \\ \boldsymbol{\mu}_{i}^{T}\mathrm{S}_{11}\boldsymbol{\mu}_{i} + 2\mathbf{s}_{1}\boldsymbol{\mu}_{i} + s_{2} = 0 \end{array} \right. $$
(23)

where γ is a Lagrange multiplier. Three cases need to be considered, and the optimal solution is selected from among the candidates of \(\boldsymbol{\mu}_{i}\) obtained in these cases.

Case A:

det(D11+γS11)≠0

According to the first equation in (23), we obtain

$$ \boldsymbol{\mu}_{i} = - \frac{ ( \mathrm{D}_{11} + \gamma \mathrm{S}_{11} )^{ *} ( \mathbf{d}_{1} + \gamma\mathbf{s}_{1} )}{\det ( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )} $$
(24)

where \(( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )^{*}\) is the adjoint matrix of \(\mathrm{D}_{11} + \gamma\mathrm{S}_{11}\). Substituting (24) into the second equation of (23), an equation of degree \(2m_{i}\) (\(2 \le m_{i} \le 4\)) is obtained:

$$ \begin{aligned}[b] &( \mathbf{d}_{1} + \gamma\mathbf{s}_{1} )^{T} ( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )^{ * T}\mathrm{S}_{11} ( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )^{ *} ( \mathbf{d}_{1} + \gamma\mathbf{s}_{1} ) \\ &\quad {}- 2\det ( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )\mathbf{s}_{1} ( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )^{ *} ( \mathbf{d}_{1} + \gamma\mathbf{s}_{1} ) \\ &\quad {}+ s_{2}\det ( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} )^{2} = 0 \end{aligned} $$
(25)

Because the order of the square matrix \(\mathrm{D}_{11} + \gamma\mathrm{S}_{11}\) is at most 4, its determinant and adjoint matrix are easy to compute. By choosing the real roots of (25) for which \(\det ( \mathrm{D}_{11} + \gamma\mathrm{S}_{11} ) \ne 0\) and substituting them into (24), we obtain the candidates of \(\boldsymbol{\mu}_{i}\).
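Case A can be sketched numerically on a synthetic instance. In the snippet below all data are made up, and the degree-\(2m_{i}\) polynomial of (25) is recovered by interpolation from samples rather than by the symbolic adjugate expansion.

```python
# Synthetic sketch of Case A: solve (D11 + g*S11) mu = -(d1 + g*s1) and
# find multipliers g where the constraint holds, by recovering the
# degree-2m polynomial obtained after clearing determinant denominators.
import numpy as np

rng = np.random.default_rng(2)
m = 3
G = rng.standard_normal((m, m))
D11 = G @ G.T                                  # synthetic symmetric block
S11 = np.diag([1.0, -1.0, -1.0])               # synthetic indefinite block
d1 = rng.standard_normal(m)
s1 = rng.standard_normal(m)
s2 = -0.5

def cleared_constraint(g):
    """det(D11+g*S11)^2 times the constraint residual at mu(g); a
    polynomial in g of degree 2m, matching the structure of (25)."""
    M = D11 + g * S11
    mu = -np.linalg.solve(M, d1 + g * s1)      # (24) without the adjugate
    return np.linalg.det(M) ** 2 * (mu @ S11 @ mu + 2 * s1 @ mu + s2)

# Recover the degree-2m polynomial from 2m+1 samples, then take real roots.
xs = np.linspace(-3.0, 3.0, 2 * m + 1)
coef = np.polynomial.polynomial.polyfit(xs, [cleared_constraint(x) for x in xs], 2 * m)
real_gs = [r.real for r in np.polynomial.polynomial.polyroots(coef)
           if abs(r.imag) < 1e-8]

candidates = []
for g in real_gs:
    M = D11 + g * S11
    if abs(np.linalg.det(M)) < 1e-10:          # singular roots go to Cases B/C
        continue
    mu = -np.linalg.solve(M, d1 + g * s1)
    if abs(mu @ S11 @ mu + 2 * s1 @ mu + s2) < 1e-6:
        candidates.append((g, mu))

# The interpolated polynomial agrees with the sampled function off the nodes.
probe = 0.5
fit_val = np.polynomial.polynomial.polyval(probe, coef)
assert abs(fit_val - cleared_constraint(probe)) < 1e-4 * (1 + abs(fit_val))
```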

Case B:

A root γ′ of det(D11+γS11)=0 makes the first equation in (23) solvable and only one eigenvalue of matrix D11+γ′S11 is zero.

As in Situation \(m_{i} = 1\), an equation like (20) is obtained from the eigen-decomposition of \(\mathrm{D}_{11} + \gamma'\mathrm{S}_{11}\). Substituting it into the second equation of (23) yields a quadratic equation; if this quadratic has real roots, the candidates of \(\boldsymbol{\mu}_{i}\) are obtained from them as in Situation \(m_{i} = 1\).
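Case B can also be sketched numerically. The instance below is constructed (not taken from the paper) so that the matrix playing the role of \(\mathrm{D}_{11} + \gamma'\mathrm{S}_{11}\) is symmetric with a one-dimensional null space and a solvable linear system; the free direction then yields the one-unknown quadratic of Situation \(m_{i} = 1\).

```python
# Synthetic sketch of Case B: A stands in for the singular D11 + gamma'*S11.
# The general solution mu(t) = mu_p + t*u0 turns the quadric constraint
# into a quadratic in the single unknown t.
import numpy as np

rng = np.random.default_rng(3)
m = 3
u0 = np.array([1.0, 2.0, -1.0])
u0 /= np.linalg.norm(u0)                       # known unit null vector
P = np.eye(m) - np.outer(u0, u0)               # projector onto u0's complement
B = rng.standard_normal((m, m))
A = P @ (B @ B.T) @ P                          # symmetric, A @ u0 = 0
rhs = A @ rng.standard_normal(m)               # guaranteed to lie in range(A)

mu_p = np.linalg.pinv(A) @ rhs                 # a particular solution
S11 = np.diag([1.0, -1.0, -1.0])               # synthetic constraint data
s1 = rng.standard_normal(m)
s2 = -0.2

# Along mu(t) = mu_p + t*u0 the constraint becomes a*t^2 + b*t + c = 0.
a = u0 @ S11 @ u0
b = 2 * (u0 @ S11 @ mu_p + s1 @ u0)
c = mu_p @ S11 @ mu_p + 2 * s1 @ mu_p + s2
ts = [r.real for r in np.roots([a, b, c]) if abs(r.imag) < 1e-10]
for t in ts:
    mu = mu_p + t * u0
    assert abs(A @ mu - rhs).max() < 1e-8      # still solves the linear system
    assert abs(mu @ S11 @ mu + 2 * s1 @ mu + s2) < 1e-8
```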

Case C:

A root γ″ of det(D11+γS11)=0 makes the first equation in (23) solvable, and more than one eigenvalue of the matrix D11+γ″S11 is zero.

The problem reduces to Situation \(m_{i} > 1\): the eigen-decomposition of \(\mathrm{D}_{11} + \gamma''\mathrm{S}_{11}\) gives an equation like (21), from which a minimization problem like (22) is constructed and then solved recursively through Cases A, B, and C. Because Case C mirrors Situation \(m_{i} > 1\), the details of obtaining the candidates of \(\boldsymbol{\mu}_{i}\) are omitted here.

Cite this article

Zhang, Q., Wu, Y., Wang, F. et al. Suboptimal Solutions to the Algebraic-Error Line Triangulation. J Math Imaging Vis 49, 611–632 (2014). https://doi.org/10.1007/s10851-013-0491-y