On the Existence of Epipolar Matrices

Abstract

This paper considers the foundational question of the existence of a fundamental (resp. essential) matrix given m point correspondences in two views. We present a complete answer for the existence of fundamental matrices for any value of m. We disprove the widely held belief that fundamental matrices always exist whenever \(m \le 7\). At the same time, we prove that they exist unconditionally when \(m \le 5\). Under a mild genericity condition, we show that an essential matrix always exists when \(m \le 4\). We also characterize the six- and seven-point configurations in two views for which all matrices satisfying the epipolar constraint have rank at most one.

References

  1. Agarwal, S., Lee, H. -L., Sturmfels, B., & Thomas, R. R. (2014). Certifying the existence of epipolar matrices. http://arxiv.org/abs/1407.5367.

  2. Chum, O., Werner, T., & Matas, J. (2005). Two-view geometry estimation unaffected by a dominant plane. In IEEE conference on computer vision and pattern recognition (pp. 772–779). Washington, DC: IEEE Computer Society.

  3. Cox, D. A., Little, J., & O’Shea, D. (2007). Ideals, varieties, and algorithms: An introduction to computational algebraic geometry and commutative algebra. Secaucus, NJ: Springer.

  4. Dalbec, J., & Sturmfels, B. (1995). Introduction to Chow forms. In Invariant methods in discrete and computational geometry (pp. 37–58). New York: Springer.

  5. Demazure, M. (1988). Sur deux problèmes de reconstruction. Technical Report 992, INRIA.

  6. Faugeras, O. D. (1992). What can be seen in three dimensions with an uncalibrated stereo rig? In European conference on computer vision (pp. 563–578). Berlin: Springer.

  7. Faugeras, O. D., & Maybank, S. (1990). Motion from point matches: Multiplicity of solutions. International Journal of Computer Vision, 4(3), 225–246.

  8. Flanders, H. (1962). On spaces of linear transformations with bounded rank. Journal of the London Mathematical Society, 1(1), 10–16.

  9. Gelfand, I. M., Kapranov, M. M., & Zelevinsky, A. (1994). Discriminants, resultants, and multidimensional determinants. Boston: Birkhäuser.

  10. Harris, J. (1992). Algebraic geometry: A first course (Vol. 133). Berlin: Springer.

  11. Hartley, R. (1992a). Estimation of relative camera positions for uncalibrated cameras. In European conference on computer vision (pp. 579–587). Berlin: Springer.

  12. Hartley, R. (1992b). Stereo from uncalibrated cameras. In IEEE conference on computer vision and pattern recognition (pp. 761–764). IEEE.

  13. Hartley, R. (1994). Projective reconstruction and invariants from multiple images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(10), 1036–1041.

  14. Hartley, R. (in press). Computation of the essential matrix from 6 points. Technical report. Schenectady, NY: GE-CRD.

  15. Hartley, R., & Zisserman, A. (2003). Multiple view geometry in computer vision (2nd ed.). Cambridge: Cambridge University Press.

  16. Harville, D. A. (1997). Matrix algebra from a statistician’s perspective, (Vol. 1). New York: Springer.

  17. Kneip, L., Siegwart, R., & Pollefeys, M. (2012). Finding the exact rotation between two images independently of the translation. In European conference on computer vision. Berlin: Springer.

  18. Longuet-Higgins, H. C. (1981). A computer algorithm for reconstructing a scene from two projections. Nature, 293, 133–135.

  19. Ma, Y., Soatto, S., Kosecka, J., & Sastry, S. S. (2012). An invitation to 3-d vision: From images to geometric models (Vol. 26). New York: Springer.

  20. Marshall, M. (2008). Positive polynomials and sums of squares (Vol. 146). Providence: American Mathematical Society.

  21. Maybank, S. (1993). Theory of reconstruction from image motion (Vol. 28). New York: Springer.

  22. Meshulam, R. (1985). On the maximal rank in a subspace of matrices. The Quarterly Journal of Mathematics, 36(2), 225–229.

  23. Nistér, D. (2004). An efficient solution to the five-point relative pose problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(6), 756–777.

  24. Semple, J., & Kneebone, G. (1998). Algebraic projective geometry. Oxford: Oxford University Press.

  25. Shafarevich, I. R. (2013). Basic algebraic geometry 1: Varieties in projective space (3rd ed.). New York: Springer.

  26. Stewénius, H. (2005). Gröbner basis methods for minimal problems in computer vision. Lund: Centre for Mathematical Sciences, Lund Institute of Technology, Lund University.

  27. Sturm, R. (1869). Das Problem der Projectivität und seine Anwendung auf die Flächen zweiten Grades. Mathematische Annalen, 1(4), 533–574.

  28. Vandenberghe, L., & Boyd, S. (1996). Semidefinite programming. SIAM Review, 38(1), 49–95.

Author information

Corresponding author

Correspondence to Sameer Agarwal.

Additional information

Lee and Thomas were partially supported by NSF Grant DMS-1418728, and Sturmfels by NSF Grant DMS-1419018.

Communicated by Jun Sato.

Appendices

Appendix 1: Proof of Lemma 5

Since \(l-x_0\) and \(m-y_0\) are lines in \({\mathbb {R}}^2\) passing through the origin, one can choose an orthogonal matrix \(W\in {\mathbb {R}}^{2\times 2}\) such that \(m-y_0 = W(l-x_0)\). It follows that

$$\begin{aligned} m= W(l-x_0)+y_0 = Wl - Wx_0+y_0 = Wl +z \end{aligned}$$

where \(z:= y_0-Wx_0\) is a point in \({\mathbb {R}}^2\). Then, for the \(3\times 3\) matrix \(H:= \left({\begin{matrix} W & z \\ 0 & 1\end{matrix}}\right)\), one has \(\left({\begin{matrix} m \\ 1 \end{matrix}}\right) = H \left({\begin{matrix} l \\ 1 \end{matrix}}\right)\), which verifies statement (2). In addition, \(H \left({\begin{matrix} x_0 \\ 1 \end{matrix}}\right) = \left({\begin{matrix} y_0 \\ 1 \end{matrix}}\right)\), and thus assertion (1) also holds. \(\square \)
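The construction in this proof is easy to check numerically. Below is a minimal sketch assuming numpy; the concrete points, rotation angle, and line direction are illustrative choices, not data from the paper:

```python
import numpy as np

# Illustrative data: a line l through x0 and a line m through y0 in R^2.
x0 = np.array([1.0, 2.0])
y0 = np.array([-3.0, 0.5])
theta = 0.7  # rotation aligning l - x0 with m - y0 (any suitable orthogonal W works)
W = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
z = y0 - W @ x0  # the translation z := y0 - W x0 from the proof

# H = [[W, z], [0, 1]] acts on homogeneous coordinates.
H = np.block([[W, z.reshape(2, 1)],
              [np.zeros((1, 2)), np.ones((1, 1))]])

# Assertion (1): H maps (x0, 1) to (y0, 1).
assert np.allclose(H @ np.append(x0, 1.0), np.append(y0, 1.0))

# Statement (2): a point l on a line through x0 is sent to m = W(l - x0) + y0,
# i.e. H (l, 1)^T = (m, 1)^T.
l = x0 + 2.5 * np.array([np.cos(0.3), np.sin(0.3)])
m = W @ (l - x0) + y0
assert np.allclose(H @ np.append(l, 1.0), np.append(m, 1.0))
```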

Appendix 2: Proof of Lemma 8

Recall that a fundamental matrix can be written in the form \( F = [b]_\times H \) where b is a non-zero vector in \({\mathbb {R}}^3\) and \(H\in {\mathbb {R}}^{3\times 3}\) is an invertible matrix. Then the epipolar constraints can be rewritten as

$$\begin{aligned}&y_i^\top F x_i = 0, \forall i = 1, \ldots , m. \nonumber \\ \iff&y_i^\top [b]_\times Hx_i = 0, \forall i = 1, \ldots , m. \nonumber \\ \iff&y_i^\top (b \times H x_i) = 0, \forall i = 1, \ldots , m. \end{aligned}$$
(19)
$$\begin{aligned} \iff&b^\top (y_i \times H x_i) = 0, \forall i = 1, \ldots , m.\nonumber \\ \iff&b^\top \begin{pmatrix} \cdots&y_i \times H x_i&\cdots \end{pmatrix} = 0. \end{aligned}$$
(20)

A non-zero b exists in the expression for F if and only if

$$\begin{aligned} \text {rank}\begin{pmatrix} \cdots&y_i \times H x_i&\cdots \end{pmatrix} < 3. \end{aligned}$$
(21)

The equivalence of (19) and (20) follows from the fact that \(p^\top (q \times r) = -q^\top (p \times r)\). The matrix in (21) is of size \(3 \times m\). A sufficient condition for it to have rank less than 3 is for \(m-2\) or more columns to be equal to zero. This is the case if we take \(H=A\) given in the assumption. \(\square \)
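The rank argument can be exercised numerically: generate correspondences consistent with some \(F = [b]_\times H\), form the \(3 \times m\) matrix appearing in (20)–(21), and recover b from its left kernel. A sketch assuming numpy (the synthetic data and variable names are illustrative):

```python
import numpy as np

def cross_mat(v):
    """Skew-symmetric [v]_x with [v]_x w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

rng = np.random.default_rng(0)
b = rng.standard_normal(3)
H = rng.standard_normal((3, 3))   # invertible with probability one
F = cross_mat(b) @ H              # a fundamental matrix F = [b]_x H

# Synthetic correspondences obeying y_i^T F x_i = 0: choose x_i freely
# and project a random y_i onto the epipolar line l_i = F x_i.
m = 7
X = rng.standard_normal((3, m))
Y = np.empty((3, m))
for i in range(m):
    l = F @ X[:, i]
    y = rng.standard_normal(3)
    Y[:, i] = y - (y @ l) / (l @ l) * l

# The 3 x m matrix of (20): b lies in its left kernel, hence (21) holds.
C = np.column_stack([np.cross(Y[:, i], H @ X[:, i]) for i in range(m)])
assert np.linalg.matrix_rank(C, tol=1e-8) < 3

# Recover b (up to scale) as the left singular vector for the smallest
# singular value, and check that [b_rec]_x H satisfies all constraints.
U, S, Vt = np.linalg.svd(C)
b_rec = U[:, -1]
F_rec = cross_mat(b_rec) @ H
assert np.allclose([Y[:, i] @ F_rec @ X[:, i] for i in range(m)], 0, atol=1e-8)
```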

The observation about the scalar triple product and the resulting rank constraint has also been used by Kneip et al. (2012), but only in the calibrated case.

Appendix 3: Proof of Theorem 2

If part: Suppose (1) holds and let \(\tau \) be the set given in (1). Then there is a \(u\in {{\mathbb {P}}}_{\mathbb {R}}^2\) such that \(u^\top y_i=0\) for any \(i\in \tau \). Let \(x_k\) be the single element in the set \(\{x_i\}_{i\notin \tau }\). Consider a basis \(\{v_1,v_2\}\subseteq {{\mathbb {P}}}_{\mathbb {R}}^2\) of the orthogonal complement of \(x_k\). For \(j=1,2\), define \(A_j = uv_j^\top \in {{\mathbb {P}}}_{\mathbb {R}}^{3\times 3}\) and let \(a_j\in {{\mathbb {P}}}_{\mathbb {R}}^8\) be its vectorization. Then \(\{a_1,a_2\}\) is a linearly independent set spanning a subset of \({\mathcal {R}}_1\). Moreover for any \(i=1,\ldots ,7\) and \(j=1,2\), \(y_i^\top A_j x_i = (y_i^\top u)(v_j^\top x_i)=0\). Hence \(a_j\in \ker _{\mathbb {R}}(Z)\) for \(j=1,2\). As \(\text {rank}(Z)=7\) [cf. (6)], \(\mathrm{ker}_{\mathbb {R}}(Z) =\mathrm{span}\{a_1,a_2\}\subseteq {\mathcal {R}}_1\). The same idea of proof works if (2) holds.
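The "if" direction lends itself to a quick numerical check: build a configuration satisfying condition (1) and verify that the rank-one matrices \(A_j = uv_j^\top \) satisfy all seven epipolar constraints. A sketch assuming numpy (the random configuration is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
# Condition (1): for i in tau the y_i lie on a common line u^T y = 0,
# and tau misses exactly one index (here the last one, k = 6 zero-based).
u = rng.standard_normal(3)
Y = np.empty((3, 7))
for i in range(6):
    y = rng.standard_normal(3)
    Y[:, i] = y - (y @ u) / (u @ u) * u   # force y_i orthogonal to u
Y[:, 6] = rng.standard_normal(3)          # the single y_k off the line
X = rng.standard_normal((3, 7))
xk = X[:, 6]

# A basis {v1, v2} of the orthogonal complement of x_k (via QR).
Q, _ = np.linalg.qr(np.column_stack([xk, rng.standard_normal((3, 2))]))
v1, v2 = Q[:, 1], Q[:, 2]

# A_j = u v_j^T is rank one and satisfies every epipolar constraint:
# for i in tau the factor y_i^T u vanishes; for i = k the factor v_j^T x_k does.
for v in (v1, v2):
    A = np.outer(u, v)
    residuals = [Y[:, i] @ A @ X[:, i] for i in range(7)]
    assert np.allclose(residuals, 0, atol=1e-8)
```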

Only if part: Consider a basis \(\{a_1,a_2\}\subseteq {{\mathbb {P}}}_{\mathbb {R}}^8\) of \(\mathrm{ker}_{\mathbb {R}}(Z)\), which is inside \({\mathcal {R}}_1\), and assume \(a_j\) is the vectorization of \(A_j\in {{\mathbb {P}}}_{\mathbb {R}}^{3\times 3}\) for \(j=1,2\). For any j, \(\text {rank}(A_j)=1\), so \(A_j = u_jv_j^\top \) for some \(u_j,v_j\in {{\mathbb {P}}}_{\mathbb {R}}^2\). Since \(\text {rank}(A_1+A_2)=1\), a simple check shows that either \(\{u_1,u_2\}\) or \(\{v_1,v_2\}\) is linearly dependent. Thus, up to scaling, we may assume either \(u_1=u_2\) or \(v_1=v_2\). If \(u_1=u_2\), then \(\{v_1,v_2\}\) is linearly independent. In addition, \(0=y_i^\top A_j x_i = (y_i^\top u)(v_j^\top x_i)\) for each \(i=1,\ldots ,6\), \(j=1,2\). Thus, either \(y_i^\top u=0\) or \(x_i\in \mathrm{span}\{v_1,v_2\}^\perp \). Notice that \(\mathrm{span}\{v_1,v_2\}^\perp \) is a singleton in \({{\mathbb {P}}}_{\mathbb {R}}^2\). As \(\text {rank}(Z)=7\), by the paragraph after Lemma 7, neither “ \(y_i^\top u=0\) for all i” nor “\(x_i\in \mathrm{span}\{v_1,v_2\}^\perp \) for all i” can happen. Hence (1) holds with the nonempty proper subset \(\tau := \{i \ : \ y_i^\top u = 0\}\) of \(\{1,\ldots ,7\}\). If \(v_1=v_2\), by the same idea one sees that (2) holds. \(\square \)

Appendix 4: Proof of Theorem 4

Recall that we are assuming that Z has full row rank, i.e., \(m=\text {rank}(Z)=6\). By Lemma 7, this can only be true for \(m=6\) if \(x_i\) and \(y_i\) are not simultaneously collinear, i.e., one of X or Y must have full row rank.

If part: If all points \(y_i\) are collinear in \({\mathbb {R}}^2\), then there is \(u\in {{\mathbb {P}}}_{\mathbb {R}}^2\) such that \(u^\top y_i=0\) for any \(i=1,\ldots ,6\). Let \(e_1=(1,0,0)^\top \), \(e_2 = (0,1,0)^\top \), \(e_3 = (0,0,1)^\top \). Consider the \(3\times 3\) matrices

$$\begin{aligned} A_j = ue_j^\top \text { for } j = 1,2,3 \end{aligned}$$

and their vectorizations \(a_j \in {{\mathbb {P}}}_{\mathbb {R}}^8\). Then, \(\{a_1,a_2,a_3\}\) is a linearly independent set spanning a subset of \({\mathcal {R}}_1\). Moreover, for any \(i=1,\ldots ,6\) and \(j=1,2,3\), \(y_i^\top A_j x_i = (y_i^\top u)(x^\top _i e_j)=0\). Hence \(a_j\in \ker _{\mathbb {R}}(Z)\). As \(\text {rank}(Z)=6\) (cf. (6)), \(\mathrm{ker}_{\mathbb {R}}(Z) = \mathrm{span}\{a_1,a_2,a_3\}\subseteq {\mathcal {R}}_1\). The same idea of proof works if all points \(x_i\) are collinear in \({\mathbb {R}}^2\).

Only if part: Consider a basis \(\{a_1,a_2,a_3\}\subseteq {{\mathbb {P}}}_{\mathbb {R}}^8\) of \(\mathrm{ker}_{\mathbb {R}}(Z)\), which is inside \({\mathcal {R}}_1\), and assume \(a_j\) is the vectorization of \(A_j\in {{\mathbb {P}}}_{\mathbb {R}}^{3\times 3}\) for \(j=1,2,3\). Then, by Lemma 2 with \(n=3\) and \(r=1\), up to taking transpose of all \(A_j\), there are non-zero vectors \(u,v_1,v_2,v_3\in {{\mathbb {P}}}_{\mathbb {R}}^2\) such that \(A_j = uv_j^\top \) for \(j=1,2,3\). The vectors \(v_j\) are linearly independent as \(A_j\) are. Moreover \(0=y_i^\top A_j x_i = (y_i^\top u)(x_i^\top v_j)\) for any \(i=1,\ldots ,6\), \(j=1,2,3\). We fix \(i\in \{1,\ldots ,6\}\) and claim that \(y_i^\top u=0\). Indeed, if \(y_i^\top u \ne 0\), then \(x_i^\top v_j= 0\) for any \(j=1,2,3\). As vectors \(v_j\) are linearly independent we have \(x_i=0\). This is impossible because \(x_i\) as a point in \({{\mathbb {P}}}_{\mathbb {R}}^2\) has non-zero third coordinate. Hence our claim is true and thus all points \(y_i\) are collinear in \({\mathbb {R}}^2\). If it is necessary to replace \(A_j\) by \(A_j^\top \), it follows that all points \(x_i\) are collinear in \({\mathbb {R}}^2\). \(\square \)

Appendix 5: Proof of Lemma 9

We first consider the case when L is a projective line, i.e.,

$$\begin{aligned} L = \{A\mu + B\eta \,:\, \mu ,\eta \in {\mathbb {R}}\} \end{aligned}$$

for some \(A,B \in {\mathbb {R}}^{3 \times 3}\), with B invertible. Then \(B^{-1}L = \{ B^{-1}A\mu + I\eta \,:\,\mu ,\eta \in {\mathbb {R}}\}\) is an isomorphic image of L and contains a matrix of rank two if and only if L does. Hence we can assume \(L = \{ M \mu - I \eta \,:\, \mu ,\eta \in {\mathbb {R}}\}\) for some \(M \in {\mathbb {R}}^{3 \times 3}\). The homogeneous cubic polynomial \(\det (M \mu - I\eta )\) is not identically zero on L. When dehomogenized by setting \(\mu =1\), it is the characteristic polynomial of M. Hence the three roots of \(\det (M \mu - I \eta )=0\) in \({{\mathbb {P}}}^1\) are \((\mu _1,\eta _1) \sim (1,\lambda _1), (\mu _2,\eta _2) \sim (1,\lambda _2)\) and \((\mu _3,\eta _3) \sim (1,\lambda _3)\) where \(\lambda _1, \lambda _2, \lambda _3\) are the eigenvalues of M. At least one of these roots is real since \(\det (M\mu - I\eta )\) is a cubic. Suppose \((\mu _1,\eta _1)\) is real. If \(\text {rank}(M\mu _1-I\eta _1) = \text {rank}(M-I\lambda _1) = 2\), then L contains a rank two matrix. Otherwise, \(\text {rank}(M-I\lambda _1) = 1\). Then \(\lambda _1\) is a double eigenvalue of M and hence equals one of \(\lambda _2\) or \(\lambda _3\). Suppose \(\lambda _1 = \lambda _2\). This implies that \((\mu _3,\eta _3)\) is a real root as well. If it is different from \((\mu _1,\eta _1)\), then it is a simple real root. Hence, \(\text {rank}(M\mu _3-I\eta _3)=2\), and L has a rank two matrix. So suppose \((\mu _1,\eta _1) \sim (\mu _2, \eta _2) \sim (\mu _3,\eta _3) \sim (1,\lambda )\) where \(\lambda \) is the unique eigenvalue of M. In that case, \(\det (M\mu - I\eta ) = \alpha \cdot (\eta -\lambda \mu )^3\) for some constant \(\alpha \). This finishes the case \(\dim (L)=1\).

Now suppose \(\dim (L) \ge 2\). If \(\det \) restricted to L is not a power of a homogeneous linear polynomial, then there exists a projective line \(L'\) in L such that \(\det \) restricted to \(L'\) is also not the power of a homogeneous linear polynomial. The projective line \(L'\) contains a matrix of rank two by the above argument. \(\square \)
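The dimension-one case can be illustrated numerically: for a generic real \(3\times 3\) matrix M, a real eigenvalue \(\lambda \) gives a real root \((1,\lambda )\) of \(\det (M\mu - I\eta )\), and when it is simple, \(M - \lambda I\) has rank two. A sketch assuming numpy (the random M is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))  # a generic matrix spanning the line with I

# det(M*mu - I*eta) dehomogenized at mu = 1 is the characteristic polynomial
# of M; a real cubic always has at least one real root.
eigvals = np.linalg.eigvals(M)
real_eigs = [lam.real for lam in eigvals if abs(lam.imag) < 1e-9]
assert real_eigs  # at least one real eigenvalue exists

# At a simple real root (1, lambda), M - lambda*I has rank two, so the
# line L = {M*mu - I*eta} meets the rank-two locus.
lam = real_eigs[0]
rank = np.linalg.matrix_rank(M - lam * np.eye(3), tol=1e-8)
assert rank == 2
```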

Appendix 6: A Proof for the Existence of a Fundamental Matrix When \(m \le 4\)

Theorem 9

If \(m\le 4\), then Z has a fundamental matrix.

Proof

If \(m\le 3\), by adding point pairs if necessary we can assume \(m=3\). One can always construct an invertible matrix H such that \(y_1 \sim H x_1\), which implies that \(y_1 \times H x_1 = 0\) and that equation (21) is satisfied.

Let us now consider the case \(m = 4\). Since \(\text {rank}(Z) = 4\), by Lemma 7, \(\text {rank}(X) \ge 2\) and \(\text {rank}(Y) \ge 2\). If we can find two indices i and j such that the matrices \(\begin{pmatrix} x_i&x_j\end{pmatrix}\) and \(\begin{pmatrix} y_i&y_j\end{pmatrix}\) both have rank 2, then we can construct an invertible matrix H such that \(y_i \sim H x_i\) and \(y_j \sim H x_j\), and that would be enough for (21). Without loss of generality let us assume that the matrix \(\begin{pmatrix} x_1&x_2\end{pmatrix}\) is of rank 2, i.e., \(x_1 \not \sim x_2\). If \(\begin{pmatrix} y_1&y_2\end{pmatrix}\) has rank 2 we are done. So let us assume that this is not the case and \(y_2 \sim y_1\). Since \(\text {rank}(Y) \ge 2\), we can without loss of generality assume that \(y_3 \not \sim y_1\). Since \(x_1 \not \sim x_2\), either \(x_3 \not \sim x_1\) or \(x_3 \not \sim x_2\). In the former case, \(i = 1, j = 3\) is the pair we want; otherwise \(i = 2, j = 3\) is the pair we want. \(\square \)
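The key step above, an invertible H with \(y_i \sim H x_i\) for two correspondences, can be built by completing each pair to a basis of \({\mathbb {R}}^3\) and mapping basis to basis. A numpy sketch (random points stand in for a generic configuration):

```python
import numpy as np

rng = np.random.default_rng(2)
# Two correspondences with x1 not ~ x2 and y1 not ~ y2 (generic choices).
x1, x2 = rng.standard_normal(3), rng.standard_normal(3)
y1, y2 = rng.standard_normal(3), rng.standard_normal(3)

# Complete each pair to a basis of R^3 via the cross product and map
# basis to basis; the resulting H is invertible and y_i ~ H x_i holds.
x3 = np.cross(x1, x2)
y3 = np.cross(y1, y2)
H = np.column_stack([y1, y2, y3]) @ np.linalg.inv(np.column_stack([x1, x2, x3]))

for x, y in [(x1, y1), (x2, y2)]:
    # y ~ H x up to scale, i.e. the cross product of H x and y vanishes.
    assert np.allclose(np.cross(H @ x, y), 0, atol=1e-8)
```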

About this article

Cite this article

Agarwal, S., Lee, H.-L., Sturmfels, B. et al. On the Existence of Epipolar Matrices. Int J Comput Vis 121, 403–415 (2017). https://doi.org/10.1007/s11263-016-0949-7

Keywords

  • Structure from motion
  • Epipolar geometry
  • Algebraic geometry