Enforcing consistency constraints in uncalibrated multiple homography estimation using latent variables

  • Original Paper
  • Published in: Machine Vision and Applications

Abstract

An approach is presented for estimating a set of interdependent homography matrices linked together by latent variables. The approach allows enforcement of all underlying consistency constraints while accounting for the arbitrariness of the scale of each individual matrix. The input data is assumed to be in the form of a set of homography matrices individually estimated from image data with no regard to the consistency constraints, accompanied by a set of error covariances, each characterising the uncertainty of a corresponding homography matrix. A statistically motivated cost function is introduced for upgrading, via optimisation, the input data to a set of homography matrices satisfying the constraints. The function is invariant to a change of any of the individual scales of the input matrices. The proposed approach is applied to the particular problem of estimating a set of homography matrices induced by multiple planes in the 3D scene between two views. An optimisation algorithm for this problem is developed that operates on natural underlying latent variables, with the use of those variables ensuring that all consistency constraints are satisfied. Experimental results indicate that the algorithm outperforms previous schemes proposed for the same task and is fully comparable in accuracy with the ‘gold standard’ bundle adjustment technique, rendering the whole approach of both practical and theoretical interest. With a view to practical application, it is shown that the proposed algorithm can be incorporated into the familiar random sampling and consensus technique, so that the resulting modified scheme is capable of robust fitting of fully consistent homographies to data with outliers.

Notes

  1. The following examples illustrate the logic behind the definition of vector transposition (this note and the next are illustrated by a short numerical sketch placed after the notes):

    $$\begin{aligned} \left[ \begin{array}{ll} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \\ a_{41} & a_{42} \\ a_{51} & a_{52} \\ a_{61} & a_{62} \end{array}\right] ^{(2)} = \left[ \begin{array}{lll} a_{11} & a_{31} & a_{51} \\ a_{21} & a_{41} & a_{61} \\ a_{12} & a_{32} & a_{52} \\ a_{22} & a_{42} & a_{62} \end{array}\right] , \quad \left[ \begin{array}{ll} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \\ a_{41} & a_{42} \\ a_{51} & a_{52} \\ a_{61} & a_{62} \end{array}\right] ^{(3)} = \left[ \begin{array}{ll} a_{11} & a_{41} \\ a_{21} & a_{51} \\ a_{31} & a_{61} \\ a_{12} & a_{42} \\ a_{22} & a_{52} \\ a_{32} & a_{62} \end{array}\right] . \end{aligned}$$
  2. The non-negative definite square root \(\mathbf {C}^{1/2}\) of a symmetric non-negative definite matrix \(\mathbf {C}\) is defined as follows: If \(\mathbf {C} = \mathbf {U} \mathbf {D} \mathbf {U}^\top \) is the eigenvalue decomposition of \(\mathbf {C}\) with \(\mathbf {U}\) an orthogonal matrix and \(\mathbf {D}\) a diagonal matrix comprising the (non-negative) eigenvalues of \(\mathbf {C}\), then \(\mathbf {C}^{1/2} = \mathbf {U} \mathbf {D} ^{1/2} \mathbf {U}^\top \), where \(\mathbf {D} ^{1/2}\) is the diagonal matrix containing the square roots of the respective entries of \(\mathbf {D}\).

  3. http://www.robots.ox.ac.uk/~vgg/data/data-mview.html.

  4. http://cs.adelaide.edu.au/~hwong/doku.php?id=data.

  5. http://www.cvl.isy.liu.se/research/datasets/traffic-signs-dataset/download/.
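
For readers who prefer a computational check, notes 1 and 2 above translate directly into a few lines of NumPy. The sketch below is illustrative only (the function names are ours, not the paper's), and it assumes that the superscript in note 1 denotes the height of the row blocks being vectorised, as the examples indicate.

```python
import numpy as np

def block_transpose(A, m):
    # Note 1: split A into stacked m-row blocks and return the matrix whose
    # i-th column is vec(block_i) (column-major vectorisation).
    k = A.shape[0] // m
    return np.column_stack(
        [A[i * m:(i + 1) * m, :].reshape(-1, order='F') for i in range(k)])

def sqrtm_psd(C):
    # Note 2: non-negative definite square root via the eigenvalue decomposition.
    w, U = np.linalg.eigh(C)                      # C = U diag(w) U^T
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.T

A = np.arange(1, 13).reshape(6, 2)                # entries a_11 ... a_62, filled row by row
print(block_transpose(A, 2))                      # the 4 x 3 matrix of note 1
print(block_transpose(A, 3))                      # the 6 x 2 matrix of note 1

C = np.array([[4.0, 1.0], [1.0, 3.0]])
S = sqrtm_psd(C)
print(np.allclose(S @ S, C))                      # True
```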

References

  1. Albert, A.: Regression and the Moore–Penrose Pseudoinverse. Academic Press, New York (1972)

  2. Baker, S., Datta, A., Kanade, T.: Parameterizing homographies. Tech. Rep. CMU-RI-TR-06-11. Robotics Institute, Carnegie Mellon University, Pittsburgh (2006)

  3. Chen, P., Suter, D.: Error analysis in homography estimation by first order approximation tools: a general technique. J. Math. Imaging Vis. 33(3), 281–295 (2009)

  4. Chen, P., Suter, D.: Rank constraints for homographies over two views: revisiting the rank four constraint. Int. J. Comput. Vis. 81(2), 205–225 (2009)

  5. Chin, T.J., Wang, H., Suter, D.: The ordered residual kernel for robust motion subspace clustering. In: Adv. Neural Inf. Process. Syst., vol. 22, pp. 333–341 (2009)

  6. Chin, T.J., Yu, J., Suter, D.: Accelerated hypothesis generation for multistructure data via preference analysis. IEEE Trans. Pattern Anal. Mach. Intell. 34(4), 625–638 (2012)

  7. Chojnacki, W., Brooks, M.J.: On the consistency of the normalized eight-point algorithm. J. Math. Imaging Vis. 28(1), 19–27 (2007)

  8. Chojnacki, W., Brooks, M.J., van den Hengel, A., Gawley, D.: On the fitting of surfaces to data with covariances. IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1294–1303 (2000)

  9. Chojnacki, W., Brooks, M.J., van den Hengel, A., Gawley, D.: Revisiting Hartley’s normalized eight-point algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 25(9), 1172–1177 (2003)

  10. Chojnacki, W., Brooks, M.J., van den Hengel, A., Gawley, D.: From FNS to HEIV: a link between two vision parameter estimation methods. IEEE Trans. Pattern Anal. Mach. Intell. 26(2), 264–268 (2004)

  11. Chojnacki, W., Brooks, M.J., van den Hengel, A., Gawley, D.: FNS, CFNS and HEIV: a unifying approach. J. Math. Imaging Vis. 23(2), 175–183 (2005)

  12. Chojnacki, W., van den Hengel, A.: A dimensionality result for multiple homography matrices. In: Proceedings of the 13th International Conference of Computer Vision, pp. 2104–2109 (2011)

  13. Chojnacki, W., van den Hengel, A.: On the dimension of the set of two-view multi-homography matrices. Complex Anal. Oper. Theory 7(2), 465–484 (2013)

  14. Chojnacki, W., Hill, R., van den Hengel, A., Brooks, M.J.: Multi-projective parameter estimation for sets of homogeneous matrices. In: Proceedings of the International Conference on Digital Image Computing: Techniques and Applications, pp. 119–124 (2009)

  15. Chojnacki, W., Szpak, Z., Brooks, M.J., van den Hengel, A.: Multiple homography estimation with full consistency constraints. In: Proceedings of the International Conference on Digital Image Computing: Techniques and Applications, pp. 480–485 (2010)

  16. Csurka, G., Zeller, C., Zhang, Z., Faugeras, O.D.: Characterizing the uncertainty of the fundamental matrix. Comput. Vis. Image Underst. 68(1), 18–36 (1997)

  17. Engl, H.W., Hanke, M., Neubauer, A.: Regularization of Inverse Problems. Kluwer, Dordrecht (1996)

  18. Fouhey, D.F., Scharstein, D., Briggs, A.J.: Multiple plane detection in image pairs using J-linkage. In: Proceedings of the 20th International Conference on Pattern Recognition, pp. 336–339 (2010)

  19. Fusiello, A.: A matter of notation: several uses of the Kronecker product in 3D computer vision. Pattern Recognit. Lett. 28(15), 2127–2132 (2007)

  20. Gao, J., Kim, S.J., Brown, M.S.: Constructing image panoramas using dual-homography warping. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 49–56 (2011)

  21. Golub, G.H., Pereyra, V.: The differentiation of pseudo-inverses and nonlinear least squares problems whose variables separate. SIAM J. Numer. Anal. 10(2), 413–432 (1973)

  22. Haralick, R.M.: Propagating covariance in computer vision. Int. J. Pattern Recognit. Artif. Intell. 10(5), 561–572 (1996)

  23. Hartley, R.: In defense of the eight-point algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 19(6), 580–593 (1997)

  24. Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2004)

  25. Kähler, O., Denzler, J.: Rigid motion constraints for tracking planar objects. In: Proceedings of the 29th DAGM Symposium. Lecture Notes in Computer Science, vol. 4713, pp. 102–111 (2007)

  26. Kanatani, K.: Statistical Optimization for Geometric Computation: Theory and Practice. Elsevier, Amsterdam (1996)

  27. Kanatani, K., Morris, D.D.: Gauges and gauge transformations for uncertainty description of geometric structure with indeterminacy. IEEE Trans. Inf. Theory 47(5), 2017–2028 (2001)

  28. Kanatani, K., Ohta, N., Kanazawa, Y.: Optimal homography computation with a reliability measure. IEICE Trans. Inf. Syst. E83-D(7), 1369–1374 (2000)

  29. Kanazawa, Y., Kawakami, H.: Detection of planar regions with uncalibrated stereo using distribution of feature points. In: Proceedings of the 15th British Machine Vision Conf., pp. 247–256 (2004)

  30. Larsson, F., Felsberg, M.: Using Fourier descriptors and spatial models for traffic sign recognition. In: Proceedings of the 17th Scandinavian Conference on Image Analysis. Lecture Notes in Computer Science, vol. 6688, pp. 238–249 (2011)

  31. Leedan, Y., Meer, P.: Heteroscedastic regression in computer vision: problems with bilinear constraint. Int. J. Comput. Vis. 37(2), 127–150 (2000)

  32. Lütkepohl, H.: Handbook of Matrices. Wiley, Chichester (1996)

  33. Ma, Y., Soatto, S., Košecká, J., Sastry, S.S.: An Invitation to 3-D Vision: From Images to Geometric Models, 2nd edn. Springer, New York (2005)

  34. Magnus, J.R., Neudecker, H.: Matrix Differential Calculus with Applications in Statistics and Econometrics. Wiley, Chichester (1988)

  35. Matei, B., Meer, P.: Estimation of nonlinear errors-in-variables models for computer vision applications. IEEE Trans. Pattern Anal. Mach. Intell. 28(10), 1537–1552 (2006)

  36. Mittal, S., Anand, S., Meer, P.: Generalized projection-based M-estimator. IEEE Trans. Pattern Anal. Mach. Intell. 34(12), 2351–2364 (2012)

  37. Penrose, R.: A generalized inverse for matrices. Math. Proc. Camb. Philos. Soc. 51(3), 406–413 (1955)

  38. Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes in C. Cambridge University Press, Cambridge (1995)

  39. Scoleri, T., Chojnacki, W., Brooks, M.J.: A multi-objective parameter estimator for image mosaicing. In: Proceedings of IEEE International Symposium on Signal Processing and its Applications, vol. 2, pp. 551–554 (2005)

  40. Shashua, A., Avidan, S.: The rank 4 constraint in multiple (\(\ge 3\)) view geometry. In: Proceedings of the 4th European Conference on Computer Vision. Lecture Notes in Computer Science, vol. 1065, pp. 196–206 (1996)

  41. Stewart, G.W.: On the continuity of the generalized inverse. SIAM J. Appl. Math. 17(1), 33–45 (1969)

  42. Szpak, Z.L., Chojnacki, W., Eriksson, A., van den Hengel, A.: Sampson distance based joint estimation of multiple homographies with uncalibrated cameras. Comput. Vis. Image Underst. 125, 200–213 (2014)

  43. Triggs, B., McLauchlan, P.F., Hartley, R.I., Fitzgibbon, A.W.: Bundle adjustment—a modern synthesis. In: Proceedings of the International Workshop on Vision Algorithms. Lecture Notes in Computer Science, vol. 1883, pp. 298–372 (1999)

  44. Vincent, E., Laganiere, R.: Detecting planar homographies in an image pair. In: Proceedings of the 2nd International Symposium on Image and Signal Processing Analysis, pp. 182–187 (2001)

  45. Wong, H.S., Chin, T.J., Yu, J., Suter, D.: Dynamic and hierarchical multi-structure geometric model fitting. In: Proceedings of the 13th International Conference on Computer Vision, pp. 1044–1051 (2011)

  46. Zelnik-Manor, L., Irani, M.: Multiview constraints on homographies. IEEE Trans. Pattern Anal. Mach. Intell. 24(2), 214–223 (2002)

Acknowledgments

This research was supported by the Australian Research Council.

Author information

Correspondence to Wojciech Chojnacki.

Appendices

Appendix A. Covariance of the AML estimate

Here, we derive a formula for the covariance matrix of the AML estimate of a vectorised homography matrix based on a set of image correspondences. It will be convenient to establish first an expression for the covariance matrix of the AML estimate of a parameter vector of a certain general model. This model will comprise, as particular cases, models whose parameters describe a relationship among image feature locations. Once the general formula for a covariance matrix is established, we shall then evolve a specialised formula for the case of the homography model.

A.1 General model

The data–parameter relationship for the general model will be assumed in the form

$$\begin{aligned} \mathbf {f}(\mathbf {z},\varvec{\beta }) = \mathbf {0}, \end{aligned}$$

where \(\mathbf {z}\) is a length-\(k\) vector describing an ideal (noiseless) data point, \(\varvec{\beta }\) is a length-\(l\) vector of parameters, and \(\mathbf {f}(\mathbf {z},\varvec{\beta })\) is a length-\(m\) vector of constraints of the form

$$\begin{aligned} \mathbf {f}(\mathbf {z},\varvec{\beta }) = \mathbf {U}(\mathbf {z})^\top \varvec{\beta }, \end{aligned}$$

where \(\mathbf {U}(\mathbf {z})\) is an \(l \times m\) matrix—the so-called data carrier matrix—with entries formed by smooth functions in \(\mathbf {z}\). Details on how this formulation applies to the homography model are given in Appendix A.2. It will be further assumed that the observed data points \(\mathbf {z}_1, \dots , \mathbf {z}_N\) come equipped with covariance matrices \(\varvec{\mathbf {\Lambda }}_{\mathbf {z}_1}^{}, \dots , \varvec{\mathbf {\Lambda }}_{\mathbf {z}_N}^{}\) quantifying measurement errors in the data. Under the assumption that the errors are independently sampled from Gaussian distributions with covariances of the form \(\varvec{\mathbf {\Lambda }}_{\mathbf {z}_n}^{}\), \(n=1, \dots , N\), the relevant AML cost function to fit the model parameters to the data is given by

$$\begin{aligned} J_{\mathrm {AML}}(\varvec{\beta }) = \sum _{n=1}^N \mathbf {f}(\mathbf {z}_n,\varvec{\beta })^\top \varvec{\mathbf {\Sigma }}(\mathbf {z}_n, \varvec{\beta })^{-1} \mathbf {f}(\mathbf {z}_n,\varvec{\beta }), \end{aligned}$$

where

$$\begin{aligned} \varvec{\mathbf {\Sigma }}(\mathbf {z}_n, \varvec{\beta }) = \partial _{\mathbf {z}}{\mathbf {f}(\mathbf {z}_n,\varvec{\beta })} \varvec{\mathbf {\Lambda }}_{\mathbf {z}_n}^{} [\partial _{\mathbf {z}}{\mathbf {f}(\mathbf {z}_n,\varvec{\beta })}]^\top \end{aligned}$$

(cf. [8, 10, 11, 26, 31, 35]). Importantly, when \(m\), the common length of the \(\mathbf {f}(\mathbf {z}_n,\varvec{\beta })\)’s, surpasses the codimension \(r\) of the submanifolds of the form \(\{\mathbf {z} \in \mathbb {R}^k \mid \mathbf {f}(\mathbf {z},\varvec{\beta }) = \mathbf {0} \}\) with \(\varvec{\beta }\) representing parameters under which the data might have been generated, the inverses \(\varvec{\mathbf {\Sigma }}(\mathbf {z}_n,\varvec{\beta })^{-1}\) in the above expression for \(J_{\mathrm {AML}}\) must be replaced by, say, the \(r\) -truncated pseudo-inverses \(\varvec{\mathbf {\Sigma }}(\mathbf {z}_n,\varvec{\beta })^+_r\) [17, 26]. Recall that the \(r\)-truncated pseudo-inverse of an \(m \times m\) matrix \(\mathbf {A}\), \(\mathbf {A}_r^+\), is defined as follows: If \(\mathbf {A} = \mathbf {U} \mathbf {D} \mathbf {V}^\top \) is the SVD of \(\mathbf {A}\), with \(\mathbf {D} = {{\mathrm{diag}}}(d_1, \dots , d_m)\), and if \(\mathbf {A}_r = \mathbf {U} \mathbf {D}_r \mathbf {V}^\top \) with \(\mathbf {D}_r = {{\mathrm{diag}}}(d_1, \dots , d_r, 0, \dots , 0)\) is the \(r\)-truncated SVD of \(\mathbf {A}\), then \(\mathbf {A}_r^+ = \mathbf {V} \mathbf {D}_r^+ \mathbf {U}^\top \) with \(\mathbf {D}_r^+ = {{\mathrm{diag}}}(d_1^+, \dots , d_r^+, 0, \dots , 0)\), where \(d_i^+ = d_i^{-1}\) when \(d_i \ne 0\) and \(d_i^+ = 0\) otherwise. The AML estimate of \(\varvec{\beta }\), \(\widehat{\varvec{\beta }}_{\mathrm {AML}}\), is the minimiser of \(J_{\mathrm {AML}}\). As a consequence of \(J_{\mathrm {AML}}\) being homogeneous of degree zero, \(\widehat{\varvec{\beta }}_{\mathrm {AML}}\) is determined only up to scale. The estimate \(\widehat{\varvec{\beta }}_{\mathrm {AML}}\) satisfies the necessary optimality condition

$$\begin{aligned}{}[\partial _{\varvec{\beta }}{J_{\mathrm {AML}}(\varvec{\beta })}]_{\varvec{\beta }= \widehat{\varvec{\beta }}_{\mathrm {AML}}} = \mathbf {0}^\top , \end{aligned}$$
(18)

which is the basis for all that follows. Using the formula

$$\begin{aligned} \mathbf {U}(\mathbf {z})^\top \varvec{\beta }= (\mathbf {I}_m \otimes \varvec{\beta }^\top ) {{\mathrm{vec}}}(\mathbf {U}(\mathbf {z})), \end{aligned}$$
(19)

one readily verifies that

$$\begin{aligned}{}[\partial _{\varvec{\beta }}{J_{\mathrm {AML}}(\varvec{\beta })}]^\top = 2 \mathbf {X}_{\varvec{\beta }} \varvec{\beta }, \end{aligned}$$

where \( \mathbf {X}_{\varvec{\beta }} = \mathbf {M}_{\varvec{\beta }} - \mathbf {N}_{\varvec{\beta }} \) is an \(l \times l\) symmetric matrix with

$$\begin{aligned} \mathbf {M}_{\varvec{\beta }}&= \sum _{n=1}^N \mathbf {U}_n \varvec{\mathbf {\Sigma }}_n^{-1} \mathbf {U}_n^T, \end{aligned}$$
(20a)
$$\begin{aligned} \mathbf {N}_{\varvec{\beta }}&= \sum _{n=1}^N \left( \varvec{\eta }_n^\top \otimes \mathbf {I}_l\right) \mathbf {B}_n (\varvec{\eta }_n\otimes \mathbf {I}_l), \end{aligned}$$
(20b)
$$\begin{aligned} \mathbf {U}_n&= \mathbf {U}(\mathbf {z}_n), \end{aligned}$$
(20c)
$$\begin{aligned} \mathbf {B}_n&= \partial _{\mathbf {z}_n}{{{\mathrm{vec}}}(\mathbf {U}_n )} \varvec{\mathbf {\Lambda }}_{\mathbf {z}_n}^{} [\partial _{\mathbf {z}_n}{{{\mathrm{vec}}}(\mathbf {U}_n )}]^\top , \end{aligned}$$
(20d)
$$\begin{aligned} \varvec{\mathbf {\Sigma }}_n&= \left( \mathbf {I}_m \otimes \varvec{\beta }^\top \right) \mathbf {B}_n (\mathbf {I}_m \otimes \varvec{\beta }), \end{aligned}$$
(20e)
$$\begin{aligned} \varvec{\eta }_n&= \varvec{\mathbf {\Sigma }}_n^{-1} \mathbf {U}_n^\top \varvec{\beta }. \end{aligned}$$
(20f)

Accordingly, Eq. (18) can be rewritten as

$$\begin{aligned} \mathbf {X}_{\hat{\varvec{\beta }}} \hat{\varvec{\beta }}= \mathbf {0}, \end{aligned}$$
(21)

where \(\widehat{\varvec{\beta }}_{\mathrm {AML}}\) is abbreviated to \(\hat{\varvec{\beta }}\) for clarity. Hereafter, \(\hat{\varvec{\beta }}=\hat{\varvec{\beta }}(\mathbf {z}_1, \dots , \mathbf {z}_N)\) will be assumed normalised and smooth as a function of \(\mathbf {z}_1, \dots , \mathbf {z}_N\).
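
The \(r\)-truncated pseudo-inverse recalled above is straightforward to realise numerically from the SVD. The following NumPy sketch is a minimal illustration (not the authors' implementation); it simply zeroes all but the \(r\) leading singular values before inverting. For the homography model of Appendix A.2, \(r = 2\) is the relevant choice.

```python
import numpy as np

def truncated_pinv(A, r, tol=1e-12):
    # r-truncated pseudo-inverse: invert only the r dominant singular values,
    # treating the rest (and any numerically zero ones among the first r) as zero.
    U, d, Vt = np.linalg.svd(A)
    d_plus = np.array([1.0 / s if (i < r and s > tol) else 0.0
                       for i, s in enumerate(d)])
    return Vt.T @ np.diag(d_plus) @ U.T

A = np.diag([3.0, 2.0, 1e-15])       # effectively rank-2
print(truncated_pinv(A, 2))          # inverts 3 and 2 only; the last direction is annihilated
```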

To derive an expression for the covariance matrix of \(\hat{\varvec{\beta }}\), we use (21) in conjunction with the covariance propagation formula

$$\begin{aligned} \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} = \sum _{n=1}^N \partial _{\mathbf {z}_n}{\hat{\varvec{\beta }}} \varvec{\mathbf {\Lambda }}_{\mathbf {z}_n}^{} \left( \partial _{\mathbf {z}_n}{\hat{\varvec{\beta }}}\right) ^{\top } \end{aligned}$$
(22)

(cf. [16, 22]). Differentiating \(\Vert \hat{\varvec{\beta }}\Vert ^2 =1\) with respect to \(\mathbf {z}_n\) gives \( (\partial _{\mathbf {z}_n}{\hat{\varvec{\beta }}})^{\top }\hat{\varvec{\beta }}= \mathbf {0}. \) This together with (22) implies that

$$\begin{aligned} \hat{\varvec{\beta }}^\top \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} = \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} \hat{\varvec{\beta }}= \mathbf {0} \end{aligned}$$

so that \(\varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{}\) is singular, and further yields

$$\begin{aligned} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} = \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp = \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{}, \end{aligned}$$
(23)

where, of course, \( \mathbf {P}_{\hat{\varvec{\beta }}}^\perp = \mathbf {I}_{l} - \Vert \hat{\varvec{\beta }}\Vert ^{-2} \hat{\varvec{\beta }}\hat{\varvec{\beta }}^{\top }. \) Letting \(\mathbf {z}_n = [z_{n1} \dots z_{nk}]^\top \) and \(\hat{\varvec{\beta }}= [\widehat{\beta }_1, \dots , \widehat{\beta }_l]^\top \), and differentiating (21) with respect to \(z_{ni}\), we obtain

$$\begin{aligned} \left[ [\partial _{z_{ni}}{\mathbf {X}_{\varvec{\beta }}}]_{\varvec{\beta }= \hat{\varvec{\beta }}} + \sum _{j=1}^l [\partial _{\beta _j}{\mathbf {X}_{\varvec{\beta }}}]_{\varvec{\beta }= \hat{\varvec{\beta }}} \partial _{z_{ni}}{{\widehat{\beta }}_j} \right] \hat{\varvec{\beta }}+ \mathbf {X}_{\hat{\varvec{\beta }}} \partial _{z_{ni}}{\hat{\varvec{\beta }}} = \mathbf {0}. \end{aligned}$$

Introducing the Gauss–Newton approximation, i.e., neglecting the terms that contain the residual \(\hat{\varvec{\beta }}^\top \mathbf {U}(\mathbf {z}_n)\), we reduce this equality to

$$\begin{aligned} \mathbf {U}_n \varvec{\mathbf {\Sigma }}_n^{-1} (\partial _{z_{ni}}{\mathbf {U}_n})^T \hat{\varvec{\beta }}+ \mathbf {M}_{\hat{\varvec{\beta }}} \partial _{z_{ni}}{\hat{\varvec{\beta }}} = \mathbf {0}. \end{aligned}$$

Now, in view of (19) and the fact that

$$\begin{aligned} (\partial _{z_{ni}}{\mathbf {U}_n})^T \hat{\varvec{\beta }}&= {{\mathrm{vec}}}((\partial _{z_{ni}}{\mathbf {U}_n})^T\hat{\varvec{\beta }}) = {{\mathrm{vec}}}(\hat{\varvec{\beta }}^T \partial _{z_{ni}}{\mathbf {U}_n})\\&= \left( \mathbf {I}_m \otimes \hat{\varvec{\beta }}^\top \right) \partial _{z_{ni}}{{{\mathrm{vec}}}(\mathbf {U}_n)}, \end{aligned}$$

we have

$$\begin{aligned} \mathbf {M}_{\hat{\varvec{\beta }}} \partial _{z_{ni}}{\hat{\varvec{\beta }}}&= - \mathbf {U}_n \varvec{\mathbf {\Sigma }}_n^{-1}(\partial _{z_{ni}}{\mathbf {U}_n})^T \hat{\varvec{\beta }}\\&= - \mathbf {U}_n \varvec{\mathbf {\Sigma }}_n^{-1} \left( \mathbf {I}_m \otimes \hat{\varvec{\beta }}^\top \right) \partial _{z_{ni}}{{{\mathrm{vec}}}(\mathbf {U}_n)} \end{aligned}$$

and further

$$\begin{aligned} \mathbf {M}_{\hat{\varvec{\beta }}} \partial _{\mathbf {z}_n}{\hat{\varvec{\beta }}} = - \mathbf {U}_n \varvec{\mathbf {\Sigma }}_n^{-1} \left( \mathbf {I}_m \otimes \hat{\varvec{\beta }}^\top \right) \partial _{\mathbf {z}_n}{{{\mathrm{vec}}}(\mathbf {U}_n)}. \end{aligned}$$

Hence,

$$\begin{aligned}&\mathbf {M}_{\hat{\varvec{\beta }}} \partial _{\mathbf {z}_n}{\hat{\varvec{\beta }}} \varvec{\mathbf {\Lambda }}_{\mathbf {z}_n}^{} (\partial _{\mathbf {z}_n}{\hat{\varvec{\beta }}})^{\top } \mathbf {M}_{\hat{\varvec{\beta }}} \\&\quad = \mathbf {U}_n \varvec{\mathbf {\Sigma }}_n^{-1} (\mathbf {I}_m \otimes \hat{\varvec{\beta }}^\top ) \\&\quad \quad \times \partial _{\mathbf {z}_n}{{{\mathrm{vec}}}(\mathbf {U}_n)} \varvec{\mathbf {\Lambda }}_{\mathbf {z}_n}^{} [\partial _{\mathbf {z}_n}{{{\mathrm{vec}}}(\mathbf {U}_n)}]^\top (\mathbf {I}_m \otimes \hat{\varvec{\beta }}) \varvec{\mathbf {\Sigma }}_n^{-1} \mathbf {U}_n^\top . \end{aligned}$$

But, by (20d) and (20e),

$$\begin{aligned}&\left( \mathbf {I}_m \otimes \hat{\varvec{\beta }}^\top \right) \partial _{\mathbf {z}_n}{{{\mathrm{vec}}}(\mathbf {U}_n)} \varvec{\mathbf {\Lambda }}_{\mathbf {z}_n}^{} [\partial _{\mathbf {z}_n}{{{\mathrm{vec}}}(\mathbf {U}_n)}]^\top (\mathbf {I}_m \otimes \hat{\varvec{\beta }})\\&\quad = \left( \mathbf {I}_m \otimes \hat{\varvec{\beta }}^\top \right) \mathbf {B}_n \left( \mathbf {I}_m \otimes \hat{\varvec{\beta }}\right) =\varvec{\mathbf {\Sigma }}_n \end{aligned}$$

so

$$\begin{aligned} \mathbf {M}_{\hat{\varvec{\beta }}} \partial _{\mathbf {z}_n}{\hat{\varvec{\beta }}} \varvec{\mathbf {\Lambda }}_{\mathbf {z}_n}^{} (\partial _{\mathbf {z}_n}{\hat{\varvec{\beta }}})^{\top } \mathbf {M}_{\hat{\varvec{\beta }}}&= \mathbf {U}_n \varvec{\mathbf {\Sigma }}_n^{-1} \varvec{\mathbf {\Sigma }}_n \varvec{\mathbf {\Sigma }}_n^{-1} \mathbf {U}_n^\top \\&= \mathbf {U}_n \varvec{\mathbf {\Sigma }}_n^{-1} \mathbf {U}_n^\top . \end{aligned}$$

Therefore, in view of (20a),

$$\begin{aligned} \mathbf {M}_{\hat{\varvec{\beta }}} \left[ \sum _{n=1}^N \partial _{\mathbf {z}_n}{\hat{\varvec{\beta }}} \varvec{\mathbf {\Lambda }}_{\mathbf {z}_n}^{} (\partial _{\mathbf {z}_n}{\hat{\varvec{\beta }}})^{\top } \right] \mathbf {M}_{\hat{\varvec{\beta }}} = \sum _{n=1}^N \mathbf {U}_n \varvec{\mathbf {\Sigma }}_n^{-1} \mathbf {U}_n^\top = \mathbf {M}_{\hat{\varvec{\beta }}}, \end{aligned}$$

or equivalently, on account of (22),

$$\begin{aligned} \mathbf {M}_{\hat{\varvec{\beta }}} \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} \mathbf {M}_{\hat{\varvec{\beta }}} = \mathbf {M}_{\hat{\varvec{\beta }}}. \end{aligned}$$
(24)

At this stage, one might be tempted to conclude that \(\varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} = \mathbf {M}_{\hat{\varvec{\beta }}}^{-1}\), but this would contravene the fact that \(\varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{}\) is singular. To exploit (24) properly, we first note that, in view of (23),

$$\begin{aligned} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp = \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{}, \end{aligned}$$
(25)

so we can rewrite (24) as

$$\begin{aligned} \mathbf {M}_{\hat{\varvec{\beta }}} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp \mathbf {M}_{\hat{\varvec{\beta }}} = \mathbf {M}_{\hat{\varvec{\beta }}}. \end{aligned}$$

Pre- and post-multiplying the last equation by \(\mathbf {P}_{\hat{\varvec{\beta }}}^\perp \) and letting

$$\begin{aligned} \mathbf {M}_{\hat{\varvec{\beta }}}^\perp = \mathbf {P}_{\hat{\varvec{\beta }}}^\perp \mathbf {M}_{\hat{\varvec{\beta }}} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp \end{aligned}$$

yield

$$\begin{aligned} \mathbf {M}_{\hat{\varvec{\beta }}}^\perp \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} \mathbf {M}_{\hat{\varvec{\beta }}}^\perp = \mathbf {M}_{\hat{\varvec{\beta }}}^\perp . \end{aligned}$$

Pre- and post-multiplying this equation by \((\mathbf {M}_{\hat{\varvec{\beta }}}^\perp )^{+}\) further yields

$$\begin{aligned} \left( \mathbf {M}_{\hat{\varvec{\beta }}}^\perp \right) ^{+} \mathbf {M}_{\hat{\varvec{\beta }}}^\perp \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} \mathbf {M}_{\hat{\varvec{\beta }}}^\perp \left( \mathbf {M}_{\hat{\varvec{\beta }}}^\perp \right) ^{+} = \left( \mathbf {M}_{\hat{\varvec{\beta }}}^\perp \right) ^{+} \mathbf {M}_{\hat{\varvec{\beta }}}^\perp \left( \mathbf {M}_{\hat{\varvec{\beta }}}^\perp \right) ^{+}. \end{aligned}$$
(26)

The matrix \(\mathbf {M}_{\hat{\varvec{\beta }}}^\perp \) is symmetric and its null space is, generically, spanned by \(\hat{\varvec{\beta }}\), so

$$\begin{aligned} \mathbf {M}_{\hat{\varvec{\beta }}}^\perp \left( \mathbf {M}_{\hat{\varvec{\beta }}}^\perp \right) ^{+} = (\mathbf {M}_{\hat{\varvec{\beta }}}^\perp )^{+} \mathbf {M}_{\hat{\varvec{\beta }}}^\perp = \mathbf {P}_{\hat{\varvec{\beta }}}^\perp \end{aligned}$$

(cf. [1, Cor. 3.5]). We also have \( (\mathbf {M}_{\hat{\varvec{\beta }}}^\perp )^{+} \mathbf {M}_{\hat{\varvec{\beta }}}^\perp (\mathbf {M}_{\hat{\varvec{\beta }}}^\perp )^{+} = (\mathbf {M}_{\hat{\varvec{\beta }}}^\perp )^{+} \) by virtue of one of the four defining properties of the pseudo-inverse [1, Thm. 3.9]. Therefore (26) can be restated as

$$\begin{aligned} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp = \left( \mathbf {M}_{\hat{\varvec{\beta }}}^\perp \right) ^{+}, \end{aligned}$$

which, on account of (25), implies

$$\begin{aligned} \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} = (\mathbf {M}_{\hat{\varvec{\beta }}}^\perp )^{+}. \end{aligned}$$
(27)

We now derive an alternate formula for the covariance matrix of \(\hat{\varvec{\beta }}\), namely

$$\begin{aligned} \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} = \mathbf {P}_{\hat{\varvec{\beta }}}^\perp (\mathbf {M}_{\hat{\varvec{\beta }}})^{+}_{l-1} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp . \end{aligned}$$
(28)

In this form, \(\varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{}\) is explicitly expressed as \( \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{} = \mathbf {P}_{\hat{\varvec{\beta }}}^\perp \varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{0} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp , \) with the pre-covariance matrix \(\varvec{\mathbf {\Lambda }}_{\hat{\varvec{\beta }}}^{0} = (\mathbf {M}_{\hat{\varvec{\beta }}})^{+}_{l-1}\). We start by noting that, in view of (21), \(\hat{\varvec{\beta }}\) is in the null space \(\mathcal {N}(\mathbf {X}_{\hat{\varvec{\beta }}})\) of \(\mathbf {X}_{\hat{\varvec{\beta }}}\). Generically, we may assume that \(\mathcal {N}(\mathbf {X}_{\hat{\varvec{\beta }}})\) is spanned by \(\hat{\varvec{\beta }}\). As \(\mathbf {X}_{\hat{\varvec{\beta }}}\) is symmetric, the column space of \(\mathbf {X}_{\hat{\varvec{\beta }}}\) is equal to the orthogonal complement of \(\mathcal {N}(\mathbf {X}_{\hat{\varvec{\beta }}})\). In particular, \(\mathbf {X}_{\hat{\varvec{\beta }}}\) has rank \(l-1\). This together with \(\mathbf {X}_{\hat{\varvec{\beta }}}\) being equal to \(\mathbf {M}_{\hat{\varvec{\beta }}}\) to a first-order approximation implies that \(\mathbf {X}_{\hat{\varvec{\beta }}}\) is in fact approximately equal to the \((l-1)\)-truncated SVD of \(\mathbf {M}_{\hat{\varvec{\beta }}}\), \((\mathbf {M}_{\hat{\varvec{\beta }}})_{l-1}\). Since the function \(\mathbf {A} \mapsto \mathbf {A}^+\) is continuous on the set of all \(l \times l\) matrices of constant rank \(l-1\) [21, 34, 37, 41], we have, approximately,

$$\begin{aligned} \mathbf {X}_{\hat{\varvec{\beta }}}^{+} = (\mathbf {M}_{\hat{\varvec{\beta }}})^{+}_{l-1}. \end{aligned}$$

Taking into account that \( \mathbf {X}_{\hat{\varvec{\beta }}}^{+} = \mathbf {P}_{\hat{\varvec{\beta }}}^\perp \mathbf {X}_{\hat{\varvec{\beta }}}^{+} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp , \) which immediately follows from (21), we see that, again approximately,

$$\begin{aligned} \mathbf {X}_{\hat{\varvec{\beta }}}^{+}=\mathbf {P}_{\hat{\varvec{\beta }}}^\perp (\mathbf {M}_{\hat{\varvec{\beta }}})^{+}_{l-1} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp . \end{aligned}$$
(29)

As a consequence of \(\mathbf {M}_{\hat{\varvec{\beta }}}\) being approximately equal to \(\mathbf {X}_{\hat{\varvec{\beta }}}\), \(\mathbf {M}_{\hat{\varvec{\beta }}}^\perp \) (= \(\mathbf {P}_{\hat{\varvec{\beta }}}^\perp \mathbf {M}_{\hat{\varvec{\beta }}} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp \)) is approximately equal to \(\mathbf {P}_{\hat{\varvec{\beta }}}^\perp \mathbf {X}_{\hat{\varvec{\beta }}} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp = \mathbf {X}_{\hat{\varvec{\beta }}}\). Both \(\mathbf {M}_{\hat{\varvec{\beta }}}^\perp \) and \(\mathbf {X}_{\hat{\varvec{\beta }}}\) have rank \(l-1\), so their pseudo-inverses are also approximately equal,

$$\begin{aligned} \left( \mathbf {M}_{\hat{\varvec{\beta }}}^\perp \right) ^+ = \mathbf {X}_{\hat{\varvec{\beta }}}^+, \end{aligned}$$

by the aforementioned continuity property of the pseudo-inverse. Hence, (29) can be restated as

$$\begin{aligned} \left( \mathbf {M}_{\hat{\varvec{\beta }}}^\perp \right) ^+ = \mathbf {P}_{\hat{\varvec{\beta }}}^\perp (\mathbf {M}_{\hat{\varvec{\beta }}})^{+}_{l-1} \mathbf {P}_{\hat{\varvec{\beta }}}^\perp , \end{aligned}$$

and this in combination with (27) yields (28).

In the case that the matrices \(\varvec{\mathbf {\Sigma }}(\mathbf {z}_n, \varvec{\beta })^{-1}\) are replaced by the matrices \(\varvec{\mathbf {\Sigma }}(\mathbf {z}_n,\varvec{\beta })^+_r\) in the expression for \(J_{\mathrm {AML}}\), a similar change also affects the matrices \(\mathbf {M}_{\hat{\varvec{\beta }}}\), \(\mathbf {N}_{\hat{\varvec{\beta }}}\), and \(\mathbf {X}_{\hat{\varvec{\beta }}}\). With \(\mathbf {M}_{\hat{\varvec{\beta }}}\) suitably modified, formulae (27) and (28) continue to hold.
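
Formula (28) is also easy to evaluate in code once \(\mathbf {M}_{\hat{\varvec{\beta }}}\) and a normalised \(\hat{\varvec{\beta }}\) are available: it is an \((l-1)\)-truncated pseudo-inverse sandwiched between two copies of the projection \(\mathbf {P}_{\hat{\varvec{\beta }}}^\perp \). A hedged NumPy sketch (illustrative only, with names of our choosing):

```python
import numpy as np

def aml_covariance(M_hat, beta_hat):
    # Formula (28): Lambda = P_perp (M_hat)^+_{l-1} P_perp,
    # with P_perp = I - beta beta^T for a unit-norm beta.
    beta = beta_hat / np.linalg.norm(beta_hat)
    l = beta.size
    P_perp = np.eye(l) - np.outer(beta, beta)
    U, d, Vt = np.linalg.svd(M_hat)
    d_plus = np.array([1.0 / s if (i < l - 1 and s > 1e-12) else 0.0
                       for i, s in enumerate(d)])
    M_trunc_pinv = Vt.T @ np.diag(d_plus) @ U.T   # (l-1)-truncated pseudo-inverse of M_hat
    return P_perp @ M_trunc_pinv @ P_perp
```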

A.2 Homography model

If a planar homography is represented by an invertible \(3 \times 3\) matrix \(\mathbf {H}\) and if \(\mathbf {m}' = [u', v', 1]^\top \) is the image of \(\mathbf {m} = [u, v, 1]^\top \) by that homography, then

$$\begin{aligned} \mathbf {m}' \simeq \mathbf {H}\, \mathbf {m}, \end{aligned}$$

where \(\,\simeq \,\) denotes equality up to scale. This relation can equivalently be written as

$$\begin{aligned}{}[\mathbf {m}']_{\times } \mathbf {H} \mathbf {m} = \mathbf {0}. \end{aligned}$$
(30)

With \(\varvec{\beta }= {{\mathrm{vec}}}(\mathbf {H})\), \(\mathbf {z} = [u,v,u',v']^\top \), and \( \mathbf {U}(\mathbf {z}) = - \mathbf {m} \otimes [\mathbf {m}']_{\times }, \) we have

$$\begin{aligned}{}[\mathbf {m}']_{\times } \mathbf {H} \mathbf {m} = \mathbf {U}(\mathbf {z})^\top \varvec{\beta }, \end{aligned}$$

and so (30) can be restated as

$$\begin{aligned} \mathbf {U}(\mathbf {z})^\top \varvec{\beta }= \mathbf {0}. \end{aligned}$$
(31)

The last relation encapsulates the homography model (for image motion) in the form conforming to the framework of Appendix A.1. Since the \(9 \times 3\) matrix \(\mathbf {U}(\mathbf {z})\) has rank \(2\), the three equations in (31) are linearly dependent and can be reduced—by deleting any one of them—to a system of two equations. For \(\varvec{\beta }\ne \mathbf {0}\), the reduced system gives two functionally independent constraints on \(\mathbf {z}\), and this has the consequence that the set of image correspondences \(\{ \mathbf {z} \in \mathbb {R}^4 \mid \mathbf {U}(\mathbf {z})^\top \varvec{\beta }= \mathbf {0} \}\) is a submanifold of \(\mathbb {R}^4\) of codimension \(2\).
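
In code, the carrier matrix is a single Kronecker product. The NumPy sketch below (the helper names are ours) builds \(\mathbf {U}(\mathbf {z})\) and checks the identity \(\mathbf {U}(\mathbf {z})^\top {{\mathrm{vec}}}(\mathbf {H}) = [\mathbf {m}']_{\times } \mathbf {H} \mathbf {m}\) stated above, which holds because \([\mathbf {m}']_{\times }\) is skew-symmetric:

```python
import numpy as np

def skew(t):
    # cross-product matrix [t]_x of a length-3 vector t
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def carrier_matrix(z):
    # U(z) = -m kron [m']_x, a 9 x 3 matrix, for z = [u, v, u', v']
    u, v, up, vp = z
    m = np.array([[u], [v], [1.0]])
    mp = np.array([up, vp, 1.0])
    return -np.kron(m, skew(mp))

# consistency check: U(z)^T vec(H) = [m']_x H m
rng = np.random.default_rng(0)
H = rng.standard_normal((3, 3))
z = np.array([0.3, -1.2, 0.7, 2.0])
m = np.array([z[0], z[1], 1.0])
mp = np.array([z[2], z[3], 1.0])
lhs = carrier_matrix(z).T @ H.reshape(-1, order='F')   # column-major vec(H)
rhs = skew(mp) @ H @ m
print(np.allclose(lhs, rhs))                           # True
```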

Let \(\{\mathbf {m}_n, \mathbf {m}^{\prime }_n\}_{n=1}^N\) be a set of image correspondences based on which an AML estimate of a homography is to be evolved. For each \(n = 1, \dots , N\), write \(\mathbf {m}_n = [u_n,v_n,1]^\top \) and \(\mathbf {m}_n^{\prime } = [u_n^{\prime },v_n^{\prime },1]^\top \) and let \(\mathbf {z}_n = [u_n,v_n,u^{\prime }_n,v^{\prime }_n]^\top \). Suppose that each pair \(\mathbf {m}_n\), \(\mathbf {m}^{\prime }_n\) comes equipped with a pair of \(2 \times 2\) respective covariance matrices \(\varvec{\mathbf {\Lambda }}_{u_n,v_n}^{}\), \(\varvec{\mathbf {\Lambda }}_{u_n^{\prime },v_n^{\prime }}^{}\). For each \(n = 1, \dots , N\), let

$$\begin{aligned} \varvec{\mathbf {\Lambda }}_{\mathbf {z_n}}^{} = \begin{bmatrix} \varvec{\mathbf {\Lambda }}_{u_n,v_n}^{}&\mathbf {0} \\ \mathbf {0}&\varvec{\mathbf {\Lambda }}_{u^{\prime }_n,v^{\prime }_n}^{} \end{bmatrix}. \end{aligned}$$

Since \(\{ \mathbf {z} \in \mathbb {R}^4 \mid \mathbf {U}(\mathbf {z})^\top \varvec{\beta }= \mathbf {0} \}\) has codimension \(2\), the appropriate AML cost function is given by

$$\begin{aligned} J_{\mathrm {AML}}(\varvec{\beta }) = \sum _{n=1}^N \varvec{\beta }^\top \mathbf {U}(\mathbf {z}_n) [\varvec{\mathbf {\Sigma }}(\mathbf {z}_n,\varvec{\beta })]^+_2 \mathbf {U}(\mathbf {z}_n)^\top \varvec{\beta }, \end{aligned}$$

where

$$\begin{aligned} \varvec{\mathbf {\Sigma }}(\mathbf {z}_n,\varvec{\beta })&= \left( \mathbf {I}_3 \otimes \varvec{\beta }^\top \right) \mathbf {B}(\mathbf {z}_n) (\mathbf {I}_3 \otimes \varvec{\beta }),\\ \mathbf {B}(\mathbf {z}_n)&= [\partial _{\mathbf {z}}{{{\mathrm{vec}}}(\mathbf {U}(\mathbf {z}))}]_{\mathbf {z} = \mathbf {z}_n} \varvec{\mathbf {\Lambda }}_{\mathbf {z}_n}^{} \left[ [\partial _{\mathbf {z}}{{{\mathrm{vec}}}(\mathbf {U}(\mathbf {z}))}]_{\mathbf {z} = \mathbf {z}_n}\right] ^\top , \end{aligned}$$

and, explicitly,

$$\begin{aligned} \partial _{\mathbf {z}}{{{\mathrm{vec}}}(\mathbf {U}(\mathbf {z}))}&= - \Big [{{\mathrm{vec}}}(\mathbf {e}_1 \otimes [\mathbf {m}']_{\times }), {{\mathrm{vec}}}(\mathbf {e}_2 \otimes [\mathbf {m}']_{\times }), \\&\qquad \quad {{\mathrm{vec}}}(\mathbf {m} \otimes [\mathbf {e}_1]_{\times }), {{\mathrm{vec}}}(\mathbf {m} \otimes [\mathbf {e}_2]_{\times })\Big ], \end{aligned}$$

with \(\mathbf {e}_1 = [1, 0, 0]^\top \) and \(\mathbf {e}_2 = [0, 1, 0]^\top \). Now, on account of (28), the covariance matrix of the AML estimate \(\widehat{\varvec{\beta }}_{\mathrm {AML}} = {{\mathrm{vec}}}(\widehat{\mathbf {H}}_{\mathrm {AML}})\) can be explicitly expressed as

$$\begin{aligned} \varvec{\mathbf {\Lambda }}_{\widehat{\varvec{\beta }}_{\mathrm {AML}}}^{} = \mathbf {P}_{\widehat{\varvec{\beta }}_{\mathrm {AML}}}^\perp \varvec{\mathbf {\Lambda }}_{\widehat{\varvec{\beta }}_{\mathrm {AML}}}^{0} \mathbf {P}_{\widehat{\varvec{\beta }}_{\mathrm {AML}}}^\perp , \end{aligned}$$

where the pre-covariance matrix \(\varvec{\mathbf {\Lambda }}_{\widehat{\varvec{\beta }}_{\mathrm {AML}}}^{0}\) is given by

$$\begin{aligned} \varvec{\mathbf {\Lambda }}_{\widehat{\varvec{\beta }}_{\mathrm {AML}}}^{0}&= (\mathbf {M}_{\widehat{\varvec{\beta }}_{\mathrm {AML}}})^+_8,\\ \mathbf {M}_{\widehat{\varvec{\beta }}_{\mathrm {AML}}}&= \left\| \widehat{\varvec{\beta }}_{\mathrm {AML}} \right\| ^2 \sum _{n=1}^N \mathbf {U}(\mathbf {z}_n) \left[ \varvec{\mathbf {\Sigma }}(\mathbf {z}_n, \widehat{\varvec{\beta }}_{\mathrm {AML}})\right] ^+_2 \mathbf {U}(\mathbf {z}_n)^\top . \end{aligned}$$
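
The quantities \(\varvec{\mathbf {\Sigma }}(\mathbf {z}_n,\varvec{\beta })\) and \(\mathbf {M}_{\widehat{\varvec{\beta }}_{\mathrm {AML}}}\) can likewise be evaluated numerically from the explicit Jacobian of \({{\mathrm{vec}}}(\mathbf {U}(\mathbf {z}))\) given above. The sketch below continues the previous one (it assumes the skew and carrier_matrix helpers defined there) and is, again, only an illustration of the formulae, not the authors' code:

```python
import numpy as np

def sigma_matrix(z, Lambda_z, beta):
    # Sigma(z, beta) = (I_3 kron beta^T) B(z) (I_3 kron beta), with
    # B(z) = d vec(U)/dz  Lambda_z  (d vec(U)/dz)^T built column by column.
    u, v, up, vp = z
    m = np.array([[u], [v], [1.0]])
    e1, e2 = np.array([[1.0], [0.0], [0.0]]), np.array([[0.0], [1.0], [0.0]])
    mp = np.array([up, vp, 1.0])
    cols = [np.kron(e1, skew(mp)), np.kron(e2, skew(mp)),
            np.kron(m, skew([1.0, 0.0, 0.0])), np.kron(m, skew([0.0, 1.0, 0.0]))]
    dvecU = -np.column_stack([c.reshape(-1, order='F') for c in cols])   # 27 x 4
    B = dvecU @ Lambda_z @ dvecU.T
    K = np.kron(np.eye(3), beta.reshape(1, -1))                          # I_3 kron beta^T
    return K @ B @ K.T

def aml_M(zs, Lambdas, beta):
    # M = ||beta||^2 * sum_n U(z_n) [Sigma(z_n, beta)]^+_2 U(z_n)^T   (a 9 x 9 matrix)
    M = np.zeros((9, 9))
    for z, Lz in zip(zs, Lambdas):
        Un = carrier_matrix(z)
        U_, d, Vt = np.linalg.svd(sigma_matrix(z, Lz, beta))
        d_plus = np.array([1.0 / s if (i < 2 and s > 1e-12) else 0.0
                           for i, s in enumerate(d)])
        M += Un @ (Vt.T @ np.diag(d_plus) @ U_.T) @ Un.T
    return (beta @ beta) * M
```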

Appendix B. Covariance of the DLT estimate

We finally derive the formula for the covariance matrix of the DLT estimate of a vectorised homography matrix under the assumption that the estimate is evolved from a normalised image data set.

Let \(\mathbf {T}\) and \(\mathbf {T}'\) be two transformations for normalising the coordinates of 2D image points,

$$\begin{aligned} \tilde{\mathbf {m}} = \mathbf {T} \mathbf {m} \quad \text {and} \quad \tilde{\mathbf {m}}' = \mathbf {T}' \mathbf {m}'. \end{aligned}$$

The maps \(\mathbf {T}\) and \(\mathbf {T}'\) induce the corresponding transformation of homographies given by

$$\begin{aligned} \tilde{\mathbf {H}} = \mathbf {T}' \mathbf {H} \mathbf {T}^{-1}. \end{aligned}$$

A defining characteristic of this latter transformation is that \( \mathbf {m}' \simeq \mathbf {H} \mathbf {m} \) holds precisely when \( \tilde{\mathbf {m}}' \simeq \tilde{\mathbf {H}} \tilde{\mathbf {m}}. \) With \( \varvec{\beta }= {{\mathrm{vec}}}(\mathbf {H}) \) and \( \tilde{\varvec{\beta }}= {{\mathrm{vec}}}(\tilde{\mathbf {H}}), \) the transformation of homographies becomes

$$\begin{aligned} \tilde{\varvec{\beta }}= (\mathbf {T}^{-\top } \otimes \mathbf {T}') \varvec{\beta }. \end{aligned}$$
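
This is just the standard identity \({{\mathrm{vec}}}(\mathbf {A}\mathbf {B}\mathbf {C}) = (\mathbf {C}^\top \otimes \mathbf {A})\,{{\mathrm{vec}}}(\mathbf {B})\) applied to \(\tilde{\mathbf {H}} = \mathbf {T}' \mathbf {H} \mathbf {T}^{-1}\), and it is easy to confirm numerically. A throwaway NumPy check (the matrices below are arbitrary stand-ins for the normalising transformations):

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((3, 3))
T = np.diag([0.01, 0.01, 1.0])                    # stand-in normalising transformations
Tp = np.diag([0.02, 0.02, 1.0])

vec = lambda A: A.reshape(-1, order='F')          # column-major vectorisation
lhs = vec(Tp @ H @ np.linalg.inv(T))              # vec(H_tilde)
rhs = np.kron(np.linalg.inv(T).T, Tp) @ vec(H)    # (T^{-T} kron T') vec(H)
print(np.allclose(lhs, rhs))                      # True
```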

Let \(\{\tilde{\mathbf {m}}_n, \tilde{\mathbf {m}}^{\prime }_n \}_{n=1}^N\) be a set of corresponding normalised 2D points. Set \(\tilde{\mathbf {z}}_n = [\tilde{{u}}_n,\tilde{{v}}_n, \tilde{{u}}^{\prime }_n, \tilde{{v}}^{\prime }_n]^\top \) for each pair \(\tilde{\mathbf {m}}_n = [\tilde{{u}}_n,\tilde{{v}}_n,1]^\top \) and \(\tilde{\mathbf {m}}^{\prime }_n = [\tilde{{u}}^{\prime }_n,\tilde{{v}}^{\prime }_n,1]^\top \), and let

$$\begin{aligned} \tilde{\mathbf {A}} = \sum _{n=1}^N \mathbf {U}(\tilde{\mathbf {z}}_n)\mathbf {U}(\tilde{\mathbf {z}}_n)^\top . \end{aligned}$$

The DLT estimate of \(\tilde{\varvec{\beta }}\), \(\widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}}\), based on \(\{\tilde{\mathbf {m}}_n, \tilde{\mathbf {m}}^{\prime }_n \}_{n=1}^N\) is defined as the minimiser of the cost function

$$\begin{aligned} \tilde{J}_{\mathrm {DLT}}(\tilde{\varvec{\beta }}) = \frac{\tilde{\varvec{\beta }}^\top \tilde{\mathbf {A}} \tilde{\varvec{\beta }}}{\Vert \tilde{\varvec{\beta }}\Vert ^2} \end{aligned}$$

and coincides with the eigenvector of \(\tilde{\mathbf {A}}\) corresponding to the smallest eigenvalue. The function \(\tilde{J}_{\mathrm {DLT}}\) is similar in form to the function \(J_{\mathrm {AML}}\)—the scalar quantity \(\Vert \tilde{\varvec{\beta }}\Vert ^2\) plays in \(\tilde{J}_{\mathrm {DLT}}\) the role of the matrices \(\varvec{\mathbf {\Sigma }}_n\) in \(J_{\mathrm {AML}}\). Exploiting this observation, one can immediately put forward an argument along the lines of Appendix A.1, showing that \(\widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}}\) has a covariance matrix of the form

$$\begin{aligned} \varvec{{\Lambda }}_{\widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}}} = \mathbf {P}_{\widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}}}^\perp \varvec{\mathbf {\Lambda }}_{\widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}}}^{0} \mathbf {P}_{\widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}}}^\perp , \end{aligned}$$

where the pre-covariance matrix \(\varvec{{\Lambda }}^{0}_{\widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}}}\) is given by

$$\begin{aligned} \varvec{{\Lambda }}^{0}_{\widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}}}&= \left( \mathbf {M}_{\widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}}}\right) _8^+ \mathbf {D}_{\widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}}}\left( \mathbf {M}_{\widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}}}\right) _8^+,\\ \mathbf {D}_{\widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}}}&= \left\| \widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}} \right\| ^{-2} \sum _{n=1}^N \mathbf {U}(\tilde{\mathbf {z}}_n) \varvec{\mathbf {\Sigma }}\left( \tilde{\mathbf {z}}_n,\widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}}\right) \mathbf {U}(\tilde{\mathbf {z}}_n)^\top . \end{aligned}$$

The details of the calculation leading to the above expression for \(\varvec{{\Lambda }}_{\widehat{\tilde{\varvec{\beta }}}_{\mathrm {DLT}}}\), analogous to those presented in Appendix A.1, are omitted.
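
Computationally, the DLT step therefore reduces to a symmetric eigenproblem for \(\tilde{\mathbf {A}}\). A minimal NumPy sketch (it reuses the carrier_matrix helper from the Appendix A.2 sketch and is not meant as a complete pipeline):

```python
import numpy as np

def dlt_homography(z_tilde_list):
    # beta_DLT is the unit eigenvector of A_tilde = sum_n U(z_n) U(z_n)^T
    # associated with its smallest eigenvalue.
    A = np.zeros((9, 9))
    for z in z_tilde_list:
        Un = carrier_matrix(z)               # 9 x 3 carrier matrix from Appendix A.2
        A += Un @ Un.T
    w, V = np.linalg.eigh(A)                 # eigenvalues in ascending order
    beta = V[:, 0]
    return beta.reshape(3, 3, order='F')     # H_tilde = unvec(beta), defined up to scale
```

The estimate in the original coordinates is then recovered as \(\mathbf {H} = \mathbf {T}'^{-1} \tilde{\mathbf {H}} \mathbf {T}\), in line with the transformation rule above.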

Cite this article

Chojnacki, W., Szpak, Z.L., Brooks, M.J. et al. Enforcing consistency constraints in uncalibrated multiple homography estimation using latent variables. Machine Vision and Applications 26, 401–422 (2015). https://doi.org/10.1007/s00138-015-0660-7
