
Feature matching based on unsupervised manifold alignment


Abstract

Feature-based methods for image registration frequently encounter the correspondence problem. In this paper, we formulate feature-based image registration as a manifold alignment problem and present a novel matching method for finding correspondences among different images containing the same object. Unlike semi-supervised manifold alignment, our method maps the data sets to the underlying common manifold without using correspondence information. An iterative multiplicative updating algorithm is proposed to optimize the objective, and its convergence is guaranteed theoretically. The proposed approach has been tested for matching accuracy and robustness to outliers, and its performance on synthetic and real images is compared with state-of-the-art reference algorithms.



References

  1. Deriche, R., Zhang, Z., Luong, Q.T., Faugeras, O.: Robust recovery of the epipolar geometry for an uncalibrated stereo rig. In: Proceedings of the 3rd European Conference on Computer Vision, Springer-Verlag, Stockholm, pp. 567–576 (1994)

  2. Besl, P.J., McKay, N.D.: A method for registration of 3D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–254 (1992)

  3. Chui, H., Rangarajan, A.: A new point matching algorithm for non-rigid registration. Comput. Vis. Image Underst. 89(2), 114–141 (2003)

  4. Fatih Demirci, M.: Graph-based shape indexing. Mach. Vis. Appl. 23(3), 541–555 (2012)

  5. Jiang, H., Ngo, C.: Graph based image matching. In: Proceedings of the 17th International Conference on Pattern Recognition, pp. 658–661 (2004)

  6. Scott, G.L., Longuet-Higgins, H.C.: An algorithm for associating the features of two patterns. Proc. R. Soc. Lond. B 244, 21–26 (1991)

  7. Shapiro, L.S., Brady, J.M.: Feature-based correspondence: an eigenvector approach. Image Vis. Comput. 10(5), 283–288 (1992)

  8. Ge, S.S., Guan, F., Pan, Y., Lo, A.P.: Neighborhood linear embedding for intrinsic structure discovery. Mach. Vis. Appl. 21(3), 391–401 (2010)

  9. Roweis, S.T., Saul, L.K.: Nonlinear dimensionality reduction by locally linear embedding. Science 290, 2323–2326 (2000)

  10. Belkin, M., Niyogi, P.: Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 15, 1373–1396 (2003)

  11. Tenenbaum, J., De Silva, V., Langford, J.: A global geometric framework for nonlinear dimensionality reduction. Science 290, 2319–2323 (2000)

  12. He, X., Niyogi, P.: Locality preserving projections. In: Advances in Neural Information Processing Systems (NIPS), vol. 16, pp. 1–8 (2003)

  13. He, X., Cai, D., Yan, S., Zhang, H.-J.: Neighborhood preserving embedding. In: Proceedings of the 10th IEEE International Conference on Computer Vision, vol. 2, pp. 1208–1213 (2005)

  14. Ham, J., Lee, D., Saul, L.: Learning high dimensional correspondence from low dimensional manifolds. In: Workshop on the Continuum from Labeled to Unlabeled Data in Machine Learning and Data Mining, pp. 34–41 (2003)

  15. Ham, J., Lee, D., Saul, L.: Semisupervised alignment of manifolds. In: Proceedings of the 8th International Workshop on Artificial Intelligence and Statistics, pp. 1–8 (2005)

  16. Verbeek, J., Roweis, S., Vlassis, N.: Non-linear CCA and PCA by alignment of local models. In: Advances in Neural Information Processing Systems, vol. 16, pp. 1–8 (2004)

  17. Verbeek, J., Vlassis, N.: Gaussian fields for semi-supervised regression and correspondence learning. Pattern Recogn. 39(10), 1864–1875 (2006)

  18. Lafon, S., Keller, Y., Coifman, R.: Data fusion and multicue data matching by diffusion maps. IEEE Trans. Pattern Anal. Mach. Intell. 28(11), 1784–1797 (2006)

  19. Zhai, D., Li, B., Chang, H., Shan, S., Chen, X., Gao, W.: Manifold alignment via corresponding projections. In: Proceedings of the British Machine Vision Conference, vol. 3, pp. 1–11 (2010)

  20. Wang, C., Mahadevan, S.: Manifold alignment without correspondence. In: Proceedings of the 21st International Joint Conference on Artificial Intelligence, pp. 1273–1278 (2009)

  21. Xiong, L., Wang, F., Zhang, C.: Semi-definite manifold alignment. In: Proceedings of the 18th European Conference on Machine Learning, pp. 773–781 (2007)

  22. Zhang, L., Qiao, L., Chen, S.: Graph-optimized locality preserving projections. Pattern Recogn. 43(6), 1993–2002 (2010)

  23. Zhang, H., Deng, W., Guo, J., Yang, J.: Locality preserving and global discriminant projection with prior information. Mach. Vis. Appl. 21(4), 577–585 (2010)

  24. Papadimitriou, C., Steiglitz, K.: Combinatorial Optimization: Algorithms and Complexity. Dover Publications, Mineola (1998)

  25. Wang, H.F., Hancock, E.R.: A kernel view of spectral point pattern matching. In: Structural, Syntactic, and Statistical Pattern Recognition, LNCS, vol. 3138, pp. 361–369 (2004)

  26. Harris, C., Stephens, M.J.: A combined corner and edge detector. In: Proceedings of the 4th Alvey Vision Conference, pp. 147–151 (1988)

  27. Szeliski, R.: Computer Vision: Algorithms and Applications. Springer, New York (2010)

  28. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)

  29. Lee, D.D., Seung, H.S.: Algorithms for non-negative matrix factorization. In: Advances in Neural Information Processing Systems, vol. 13. MIT Press (2001)

  30. Cai, D., He, X., Han, J., Huang, T.S.: Graph regularized nonnegative matrix factorization for data representation. IEEE Trans. Pattern Anal. Mach. Intell. 33(8), 1548–1560 (2011)


Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61201323, 60972150), Northwestern Polytechnical University Foundation of Fundamental Research (No. JC20110277), and Northwestern Polytechnical University Doctoral Dissertation Innovation Foundation (No. CX200819).

Author information


Corresponding author

Correspondence to Weidong Yan.

Appendix (Proof of Theorem 1)

To prove Theorem 1, we must show that \(J\) is non-increasing under the update steps in Eqs. (19) and (20). We first prove this for the update step in Eq. (19), following a procedure similar to that described in [29]: convergence is established via the auxiliary function approach. We begin by recalling the definition of an auxiliary function.

Definition

\(G(u,u^{{\prime }})\) is an auxiliary function for \(F(u)\) if the conditions

$$\begin{aligned} G(u,u^{{\prime }})\ge F(u),\quad G(u,u)=F(u) \end{aligned}$$
(30)

are satisfied.
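
As a simple illustration (ours, not from the paper): any quadratic upper bound anchored at \(u^{{\prime }}\) qualifies. If \(F\) is twice differentiable with \(F^{{\prime }{\prime }}(u)\le 2\kappa \) for all \(u\), then

$$\begin{aligned} G(u,u^{{\prime }})=F(u^{{\prime }})+F^{{\prime }}(u^{{\prime }})(u-u^{{\prime }})+\kappa (u-u^{{\prime }})^{2} \end{aligned}$$

satisfies \(G(u,u)=F(u)\) and, by Taylor's theorem, \(G(u,u^{{\prime }})\ge F(u)\). Lemma 2 below constructs exactly such a bound for the part of \(J\) that depends on a single entry of \(A\).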

Lemma 1

If \(G\) is an auxiliary function of \(F\), then \(F\) is non-increasing under the update

$$\begin{aligned} u^{(t+1)}=\mathop {\arg \min }\limits _u G(u,u^{(t)}) \end{aligned}$$
(31)

Proof

\(F(u^{(t+1)})\le G(u^{(t+1)},u^{(t)})\le G(u^{(t)},u^{(t)})=F(u^{(t)})\), where the first inequality holds because \(G\ge F\), and the second because \(u^{(t+1)}\) minimizes \(G(\cdot ,u^{(t)})\). \(\square \)

For any element \(a_{kl} \) of \(A\), let \(F\) denote the part of \(J\) that depends only on \(a_{kl} \). It is easy to check that

$$\begin{aligned} F^{{\prime }}(a_{kl} )=\left( {\frac{\partial J}{\partial A}} \right)_{kl} =\left( 2XH^{x}X^{T}A-2XHY^{T}B+2\lambda _1 XM^{x}X^{T}A \right)_{kl} \end{aligned}$$
(32)
$$\begin{aligned} F^{{\prime }{\prime }}(a_{kl} )&= \frac{\partial \left( 2XH^{x}X^{T}A-2XHY^{T}B+2\lambda _1 XM^{x}X^{T}A \right)_{kl} }{\partial a_{kl} } \\&= 2\frac{\partial (XH^{x}X^{T}A)_{kl} }{\partial a_{kl} }+2\lambda _1 \frac{\partial (XM^{x}X^{T}A)_{kl} }{\partial a_{kl} } \\&= 2\frac{\partial \sum _i {(XH^{x}X^{T})_{ki} a_{il} } }{\partial a_{kl} }+2\lambda _1 \frac{\partial \sum _j {(XM^{x}X^{T})_{kj} a_{jl} } }{\partial a_{kl} } \\&= 2(XH^{x}X^{T})_{kk} +2\lambda _1 (XM^{x}X^{T})_{kk} \end{aligned}$$
(33)

Lemma 2

Function

$$\begin{aligned} G(a,a_{kl}^{(t)} )&= F(a_{kl}^{(t)} )+F^{{\prime }}(a_{kl}^{(t)} )(a-a_{kl}^{(t)} ) \\&\quad +\frac{(XH^{x}X^{T}A)_{kl} +\lambda _1 (XM^{x}X^{T}A)_{kl} }{a_{kl}^{(t)} }(a-a_{kl}^{(t)} )^{2} \end{aligned}$$
(34)

is an auxiliary function for \(F(a)\).

Proof

Since \(G(a,a)=F(a)\) is obvious, we need only show that \(G(a,a_{kl}^{(t)} )\ge F(a)\). To do this, we compare \(G(a,a_{kl}^{(t)} )\) with the Taylor series expansion of \(F(a)\):

$$\begin{aligned} F(a)&= F(a_{kl}^{(t)} )+F^{{\prime }}(a_{kl}^{(t)} )(a-a_{kl}^{(t)} )+\frac{1}{2}F^{{\prime }{\prime }}(a_{kl}^{(t)} )(a-a_{kl}^{(t)} )^{2} \\&= F(a_{kl}^{(t)} )+F^{{\prime }}(a_{kl}^{(t)} )(a-a_{kl}^{(t)} ) \\&\quad +\left[ (XH^{x}X^{T})_{kk} +\lambda _1 (XM^{x}X^{T})_{kk} \right] (a-a_{kl}^{(t)} )^{2} \end{aligned}$$
(35)

Comparing Eq. (35) with Eq. (34), we find that \(G(a,a_{kl}^{(t)} )\ge F(a)\) is equivalent to [30]

$$\begin{aligned} \frac{(XH^{x}X^{T}A)_{kl} +\lambda _1 (XM^{x}X^{T}A)_{kl} }{a_{kl}^{(t)} }\ge (XH^{x}X^{T})_{kk} +\lambda _1 (XM^{x}X^{T})_{kk} \end{aligned}$$
(36)

Since

$$\begin{aligned} (XH^{x}X^{T}A)_{kl} =\sum _i {(XH^{x}X^{T})_{ki} a_{il}^{(t)} } \ge (XH^{x}X^{T})_{kk} a_{kl}^{(t)} \end{aligned}$$
(37)

and

$$\begin{aligned} \lambda _1 (XM^{x}X^{T}A)_{kl} =\lambda _1 \sum _j {(XM^{x}X^{T})_{kj} a_{jl}^{(t)} } \ge \lambda _1 (XM^{x}X^{T})_{kk} a_{kl}^{(t)} \end{aligned}$$
(38)

the inequality in Eq. (36) holds, and thus \(G(a,a_{kl}^{(t)} )\ge F(a)\). \(\square \)

We can now prove Theorem 1.

Proof of Theorem 1

By Lemma 1, it suffices to take \(a^{(t+1)}=\mathop {\arg \min }\limits _a G(a,a^{(t)})\), where

$$\begin{aligned} G(a,a_{kl}^{(t)} )&= F(a_{kl}^{(t)} )+F^{{\prime }}(a_{kl}^{(t)} )(a-a_{kl}^{(t)} ) \\&\quad +\frac{(XH^{x}X^{T}A)_{kl} +\lambda _1 (XM^{x}X^{T}A)_{kl} }{a_{kl}^{(t)} }(a-a_{kl}^{(t)} )^{2} \end{aligned}$$
(39)

The minimum of \(G(a,a^{(t)})\) with respect to \(a\) is found by setting the gradient to zero, \(\frac{\partial G(a,a^{(t)})}{\partial a}=0\):

$$\begin{aligned} \frac{\partial G(a,a_{kl}^{(t)} )}{\partial a}=F^{{\prime }}(a_{kl}^{(t)} )+\frac{2(XH^{x}X^{T}A)_{kl} +2\lambda _1 (XM^{x}X^{T}A)_{kl} }{a_{kl}^{(t)} }(a-a_{kl}^{(t)} )=0 \end{aligned}$$
(40)

Substituting \(F^{{\prime }}(a_{kl}^{(t)} )\) from Eq. (32) and solving for \(a\), we get

$$\begin{aligned} a_{kl}^{(t+1)} =a_{kl}^{(t)} \frac{(XHY^{T}B)_{kl} }{(XH^{x}X^{T}A)_{kl} +\lambda _1 (XM^{x}X^{T}A)_{kl} } \end{aligned}$$
(41)

Since \(G(a,a^{(t)})\) is an auxiliary function, \(F\) is non-increasing under this update rule. \(J\) can similarly be shown to be non-increasing under the update rules for \(B\). \(\square \)
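
For concreteness, here is a minimal sketch of the multiplicative update in Eq. (41), assuming \(X\) and \(Y\) are the two feature matrices and \(H\), \(H^{x}\), \(M^{x}\) the precomputed graph matrices appearing in the objective \(J\); all variable names and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def update_A(A, B, X, Y, H, Hx, Mx, lam1, eps=1e-12):
    """One multiplicative update of A, following Eq. (41).

    Illustrative assumptions: X (d1 x n1), Y (d2 x n2) are feature
    matrices; H (n1 x n2), Hx (n1 x n1), Mx (n1 x n1) are the graph
    matrices of the objective J; lam1 is the regularization weight.
    """
    numer = X @ H @ Y.T @ B
    denom = X @ Hx @ X.T @ A + lam1 * (X @ Mx @ X.T @ A)
    # Elementwise multiplicative update; eps guards against division by zero.
    return A * numer / (denom + eps)
```

The update for \(B\) (Eq. (20), not reproduced here) has the same multiplicative form with the roles of the two data sets exchanged; since each step minimizes the auxiliary function \(G\), alternating the two updates drives \(J\) monotonically downward until convergence.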


Cite this article

Yan, W., Tian, Z., Duan, X. et al. Feature matching based on unsupervised manifold alignment. Machine Vision and Applications 24, 983–994 (2013). https://doi.org/10.1007/s00138-012-0479-4
