
Covariance Weighted Procrustes Analysis

Chapter in: Riemannian Computing in Computer Vision

Abstract

We revisit the popular Procrustes matching procedure of landmark shape analysis and consider the situation where the landmark coordinates have a completely general covariance matrix, extending previous approaches based on factored covariance structures. Procrustes matching is used to compute the Riemannian metric in shape space and, more widely, to carry out inference such as estimation of mean shape and covariance structure. Rather than matching using the Euclidean distance, we consider a general Mahalanobis distance. This approach allows us to model different variances at each landmark, covariance between the landmark coordinates, and more general covariance structures. Explicit expressions are given for the optimal translation and rotation in two dimensions, and numerical procedures are used in higher dimensions. Simultaneous estimation of both mean shape and covariance structure is difficult due to the inherent non-identifiability; the method therefore requires constraints to be specified before inference can be carried out, and we discuss some practical choices. We illustrate the methodology using data from fish silhouettes and mouse vertebra images.
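
To make the matching criterion concrete, the following is a minimal NumPy sketch of the covariance weighted Procrustes objective between a configuration X and a mean μ, using a general Mahalanobis distance with a km × km covariance matrix Σ. The function name, argument layout, and column-stacking vectorisation are illustrative assumptions, not notation taken from the chapter.

```python
import numpy as np

def cwp_distance_sq(X, mu, Sigma_inv, Gamma, beta=1.0, gamma=None):
    """Covariance weighted Procrustes objective (illustrative sketch):
    D^2 = vec(mu - beta * X Gamma - 1_k gamma^T)^T Sigma^{-1} vec(.),
    where X and mu are k x m landmark configurations, Gamma is an m x m
    rotation, beta a scale, gamma an m-vector translation, and Sigma_inv
    the km x km inverse covariance of the vectorised coordinates."""
    k, m = X.shape
    if gamma is None:
        gamma = np.zeros(m)
    resid = mu - beta * (X @ Gamma) - np.outer(np.ones(k), gamma)
    v = resid.flatten(order="F")  # vec() stacks the coordinate columns
    return float(v @ Sigma_inv @ v)
```

With Σ taken as the identity this reduces to the ordinary Euclidean Procrustes sum of squares; different landmark variances and correlations enter through Sigma_inv.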

References

  1. Bookstein FL (1986) Size and shape spaces for landmark data in two dimensions (with discussion). Stat Sci 1:181–242

  2. Brignell CJ (2007) Shape analysis and statistical modelling in brain imaging. Ph.D. thesis, University of Nottingham

  3. Davies RH, Twining CJ, Taylor CJ (2008) Statistical models of shape: optimisation and evaluation. Springer, Heidelberg. http://www.springer.com/computer/computer+imaging/book/978-1-84800-137-4

  4. Dryden IL (1989) The statistical analysis of shape data. Ph.D. thesis, University of Leeds

  5. Dryden IL (2014) shapes: statistical shape analysis. R package version 1.1-10. http://CRAN.R-project.org/package=shapes

  6. Dryden IL, Mardia KV (1998) Statistical shape analysis. Wiley, Chichester

  7. Dutilleul P (1999) The MLE algorithm for the matrix normal distribution. J Stat Comput Simul 64:105–123

  8. Goodall CR (1991) Procrustes methods in the statistical analysis of shape (with discussion). J R Stat Soc Ser B 53:285–339

  9. Gower JC (1975) Generalized Procrustes analysis. Psychometrika 40:33–50

  10. Kendall DG (1984) Shape manifolds, Procrustean metrics and complex projective spaces. Bull Lond Math Soc 16:81–121

  11. Kendall DG (1989) A survey of the statistical theory of shape (with discussion). Stat Sci 4:87–120

  12. Kendall DG, Barden D, Carne TK, Le H (1999) Shape and shape theory. Wiley, Chichester

  13. Koschat M, Swayne D (1991) A weighted Procrustes criterion. Psychometrika 56(2):229–239. doi:10.1007/BF02294460

  14. Krim H, Yezzi AJ (2006) Statistics and analysis of shapes. Springer, Berlin

  15. Lele S (1993) Euclidean distance matrix analysis (EDMA): estimation of mean form and mean form difference. Math Geol 25(5):573–602. doi:10.1007/BF00890247

  16. Mardia KV, Dryden IL (1989) The statistical analysis of shape data. Biometrika 76:271–282

  17. Sharvit D, Chan J, Tek H, Kimia BB (1998) Symmetry-based indexing of image databases. J Vis Commun Image Represent 9(4):366–380

  18. Srivastava A, Klassen E, Joshi SH, Jermyn IH (2011) Shape analysis of elastic curves in Euclidean spaces. IEEE Trans Pattern Anal Mach Intell 33(7):1415–1428. doi:10.1109/TPAMI.2010.184

  19. Srivastava A, Turaga PK, Kurtek S (2012) On advances in differential-geometric approaches for 2D and 3D shape analyses and activity recognition. Image Vis Comput 30(6–7):398–416

  20. Theobald DL, Wuttke DS (2006) Empirical Bayes hierarchical models for regularizing maximum likelihood estimation in the matrix Gaussian Procrustes problem. Proc Natl Acad Sci 103(49):18521–18527. doi:10.1073/pnas.0508445103

  21. Theobald DL, Wuttke DS (2008) Accurate structural correlations from maximum likelihood superpositions. PLoS Comput Biol 4(2):e43

  22. Younes L (2010) Shapes and diffeomorphisms. Applied mathematical sciences, vol 171. Springer, Berlin. doi:10.1007/978-3-642-12055-8

Acknowledgements

We acknowledge the support of a Royal Society Wolfson Research Merit Award and EPSRC grant EP/K022547/1.

Author information

Correspondence to Christopher J. Brignell.

Appendix

Proof of Result 2.1.

Let \(v = \text{vec}(\mu -X\varGamma )\), then

$$ \displaystyle\begin{array}{rcl} D_{\mathrm{pCWP}}^{2}(X,\mu;\varSigma )& =& (v - (I_{ m} \otimes 1_{k})\gamma )^{T}\varSigma ^{-1}(v - (I_{ m} \otimes 1_{k})\gamma ) {}\\ & =& v^{T}\varSigma ^{-1}v - 2v^{T}\varSigma ^{-1}(I_{ m} \otimes 1_{k})\gamma {}\\ & & +\gamma ^{T}(I_{ m} \otimes 1_{k})^{T}\varSigma ^{-1}(I_{ m} \otimes 1_{k})\gamma. {}\\ \end{array} $$

The minimising translation is found by setting the first derivative equal to zero:

$$ \displaystyle\begin{array}{rcl} \frac{dD_{\mathrm{pCWP}}^{2}} {d\gamma } = -2(I_{m} \otimes 1_{k})^{T}\varSigma ^{-1}v + 2(I_{ m} \otimes 1_{k})^{T}\varSigma ^{-1}(I_{ m} \otimes 1_{k})\gamma = 0.& & {}\\ \end{array} $$

The matrix of second derivatives is positive definite because \(\varSigma ^{-1}\) is positive definite. Therefore, \(D_{\mathrm{pCWP}}^{2}\) is minimised when \(\gamma = [(I_{m} \otimes 1_{k})^{T}\varSigma ^{-1}(I_{m} \otimes 1_{k})]^{-1}(I_{m} \otimes 1_{k})^{T}\varSigma ^{-1}v\). \(\square \)
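
A direct numerical transcription of this minimising translation can be useful as a check on the algebra. The sketch below assumes k × m configurations, column-stacked vectorisation, and that the input configuration has already been rotated (i.e. it plays the role of XΓ); it is an illustration, not code from the chapter.

```python
import numpy as np

def optimal_translation(X_rot, mu, Sigma_inv):
    """Minimising translation of Result 2.1 (sketch):
    gamma_hat = [(I_m kron 1_k)^T Sigma^{-1} (I_m kron 1_k)]^{-1}
                (I_m kron 1_k)^T Sigma^{-1} vec(mu - X Gamma)."""
    k, m = X_rot.shape
    v = (mu - X_rot).flatten(order="F")       # vec(mu - X Gamma)
    J = np.kron(np.eye(m), np.ones((k, 1)))   # I_m kron 1_k, size km x m
    A = J.T @ Sigma_inv @ J
    return np.linalg.solve(A, J.T @ Sigma_inv @ v)
```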

Proof of Result 2.2.

From Eq. (9.2) the minimising translation is \(\hat{\gamma }= A\text{vec}(\mu -X\varGamma )\), so for m = 2,

$$\displaystyle\begin{array}{rcl} \left [\begin{array}{c} \hat{\gamma }_{1}\\ \hat{\gamma }_{ 2} \end{array} \right ] = \left [\begin{array}{c} \alpha _{1} +\delta _{1}\cos \theta +\zeta _{1}\sin \theta \\ \alpha _{2} +\delta _{2}\cos \theta +\zeta _{2}\sin \theta \end{array} \right ],\text{ because, }\varGamma = \left [\begin{array}{cc} \cos \theta &\sin \theta \\ -\sin \theta &\cos \theta \end{array} \right ].& & {}\\ \end{array}$$

Therefore,

$$\displaystyle\begin{array}{rcl} \text{vec}(\mu -X\varGamma - 1_{k}\gamma ^{T})&& {}\\ & & = \left [\begin{array}{c} (\mu _{1} - 1_{k}\alpha _{1}) - (X_{1} + 1_{k}\delta _{1})\cos \theta + (X_{2} - 1_{k}\zeta _{1})\sin \theta \\ (\mu _{2} - 1_{k}\alpha _{2}) - (X_{2} + 1_{k}\delta _{2})\cos \theta - (X_{1} + 1_{k}\zeta _{2})\sin \theta \end{array} \right ],{}\\ \end{array}$$

and \(D_{\mathrm{pCWP}}^{2}(X,\mu;\varSigma ) = C + P\cos ^{2}\theta + Q\sin ^{2}\theta + R\cos \theta \sin \theta + S\cos \theta + T\sin \theta\) where

$$\displaystyle\begin{array}{rcl} C = \left [\begin{array}{c} (\mu _{1} - 1_{k}\alpha _{1}) \\ (\mu _{2} - 1_{k}\alpha _{2}) \end{array} \right ]^{T}\varSigma ^{-1}\left [\begin{array}{c} (\mu _{1} - 1_{k}\alpha _{1}) \\ (\mu _{2} - 1_{k}\alpha _{2}) \end{array} \right ].& & {}\\ \end{array}$$

Let \(\lambda\) be the real Lagrangian multiplier to enforce the constraint \(\cos ^{2}\theta +\sin ^{2}\theta = 1\) and let \(L = D_{\mathrm{pCWP}}^{2}(X,\mu;\varSigma ) +\lambda (1 -\cos ^{2}\theta -\sin ^{2}\theta )\). Then,

$$\displaystyle\begin{array}{rcl} \frac{\partial L} {\partial (\cos \theta )}& =& 2(P-\lambda )\cos \theta + R\sin \theta + S = 0, {}\\ \frac{\partial L} {\partial (\sin \theta )}& =& 2(Q-\lambda )\sin \theta + R\cos \theta + T = 0, {}\\ \frac{\partial L} {\partial \lambda } & =& 1 -\cos ^{2}\theta -\sin ^{2}\theta = 0. {}\\ \end{array}$$

Solving the first two equations simultaneously and substituting the solutions in the third gives the expressions for \(\cos \theta\), \(\sin \theta\) and the quartic equation, respectively. To show this is a minimum of \(D_{\mathrm{pCWP}}^{2}\), consider the matrix of second derivatives,

$$\displaystyle\begin{array}{rcl} S^{{\ast}} = \left [\begin{array}{cc} \frac{\partial ^{2}L} {\partial (\cos \theta )^{2}} & \frac{\partial ^{2}L} {\partial (\cos \theta )\partial (\sin \theta )} \\ \frac{\partial ^{2}L} {\partial (\cos \theta )\partial (\sin \theta )} & \frac{\partial ^{2}L} {\partial (\sin \theta )^{2}} \end{array} \right ] = \left [\begin{array}{cc} 2(P-\lambda )& R\\ R &2(Q-\lambda ) \end{array} \right ].& & {}\\ \end{array}$$

Let \(\xi _{1} \geq \xi _{2}\) be the eigenvalues of \(S^{{\ast}}\). These satisfy \(\vert S^{{\ast}}-\xi _{i}I\vert = (\xi _{i} + 2\lambda )^{2} - 2(P + Q)(\xi _{i} + 2\lambda ) + 4PQ - R^{2} = 0\), so \((\xi _{i} + 2\lambda ) = P + Q \pm \sqrt{(P - Q)^{2 } + R^{2}}\). Since \(\varSigma ^{-1}\) is positive definite, P > 0 and Q > 0, and \(\xi _{2}\) is strictly positive if \(P + Q - 2\lambda -\sqrt{(P - Q)^{2 } + R^{2}} > 0\), which holds if the constraint on \(\lambda\) is satisfied. \(\square \)
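
Rather than solving the quartic, the constrained minimum over the rotation can also be located numerically. The sketch below simply evaluates \(P\cos ^{2}\theta + Q\sin ^{2}\theta + R\cos \theta \sin \theta + S\cos \theta + T\sin \theta\) (the constant C is irrelevant to the minimiser) on a fine grid of angles; the grid search is an assumption of this illustration, not the procedure of the chapter.

```python
import numpy as np

def optimal_rotation_angle(P, Q, R, S, T, n_grid=100001):
    """Numerical stand-in for Result 2.2 (sketch): minimise the rotation part
    of the weighted Procrustes objective over theta in (-pi, pi] on a dense grid."""
    theta = np.linspace(-np.pi, np.pi, n_grid)
    c, s = np.cos(theta), np.sin(theta)
    objective = P * c**2 + Q * s**2 + R * c * s + S * c + T * s
    return theta[np.argmin(objective)]
```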

Proof of Result 2.3.

Let v = vec(μ) and \(\xi = \text{vec}(X\varGamma )\), then

$$\displaystyle\begin{array}{rcl} D_{\mathrm{CWP}}^{2}(X,\mu;\varSigma )& =& (v -\beta \xi -(I_{ m} \otimes 1_{k})\gamma )^{T}\varSigma ^{-1}(v -\beta \xi -(I_{ m} \otimes 1_{k})\gamma ) {}\\ & =& v^{T}\varSigma ^{-1}v - 2\beta \xi ^{T}\varSigma ^{-1}v - 2v^{T}\varSigma ^{-1}(I_{ m} \otimes 1_{k})\gamma +\beta ^{2}\xi ^{T}\varSigma ^{-1}\xi {}\\ & & +2\beta \xi ^{T}\varSigma ^{-1}(I_{ m} \otimes 1_{k})\gamma +\gamma ^{T}(I_{ m} \otimes 1_{k})^{T}\varSigma ^{-1}(I_{ m} \otimes 1_{k})\gamma. {}\\ \end{array}$$

This implies

$$\displaystyle\begin{array}{rcl} \frac{dD_{\mathrm{CWP}}^{2}} {d\gamma } & =& -2(I_{m} \otimes 1_{k})^{T}\varSigma ^{-1}v + 2\beta (I_{ m} \otimes 1_{k})^{T}\varSigma ^{-1}\xi {}\\ & & +2(I_{m} \otimes 1_{k})^{T}\varSigma ^{-1}(I_{ m} \otimes 1_{k})\gamma, {}\\ \frac{dD_{\mathrm{CWP}}^{2}} {d\beta } & =& -2\xi ^{T}\varSigma ^{-1}v + 2\beta \xi ^{T}\varSigma ^{-1}\xi + 2\xi ^{T}\varSigma ^{-1}(I_{ m} \otimes 1_{k})\gamma. {}\\ \end{array}$$

Therefore, the minimum is at the solution of

$$\displaystyle\begin{array}{rcl} B\left [\begin{array}{c} \gamma \\ \beta \end{array} \right ] = \left [\begin{array}{c} (I_{m} \otimes 1_{k})^{T}\varSigma ^{-1}\mathrm{vec}(\mu ) \\ \mathrm{vec}(X\varGamma )^{T}\varSigma ^{-1}\mathrm{vec}(\mu ) \end{array} \right ].& & {}\\ \end{array}$$

The matrix of second derivatives is positive definite because \(\varSigma ^{-1}\) is positive definite. \(\square \)
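
The linear system above can be assembled and solved directly. In the sketch below, B is built from the two derivative equations; this block layout is an assumption of this illustration, since B itself is defined in the main text rather than reproduced in this appendix.

```python
import numpy as np

def optimal_translation_and_scale(X_rot, mu, Sigma_inv):
    """Sketch of Result 2.3: jointly solve for the translation gamma and
    scale beta, with the rotation held fixed (X_rot plays the role of X Gamma)."""
    k, m = X_rot.shape
    v = mu.flatten(order="F")                 # vec(mu)
    xi = X_rot.flatten(order="F")             # vec(X Gamma)
    J = np.kron(np.eye(m), np.ones((k, 1)))   # I_m kron 1_k
    B = np.block([
        [J.T @ Sigma_inv @ J, (J.T @ Sigma_inv @ xi)[:, None]],
        [(xi @ Sigma_inv @ J)[None, :], np.array([[xi @ Sigma_inv @ xi]])],
    ])
    rhs = np.concatenate([J.T @ Sigma_inv @ v, [xi @ Sigma_inv @ v]])
    solution = np.linalg.solve(B, rhs)
    return solution[:m], solution[m]          # gamma_hat, beta_hat
```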

Proof of Result 2.4.

Replacing \(\cos \theta\) with \(\beta \cos \theta\) and \(\sin \theta\) with \(\beta \sin \theta\) in the proof of Result 2.2 gives \(D_{\mathrm{CWP}}^{2}(X,\mu;\varSigma ) = C + P\beta ^{2}\cos ^{2}\theta + Q\beta ^{2}\sin ^{2}\theta + R\beta ^{2}\cos \theta \sin \theta + S\beta \cos \theta + T\beta \sin \theta\). Let \(\psi _{1} =\beta \cos \theta\) and \(\psi _{2} =\beta \sin \theta\), then

$$\displaystyle\begin{array}{rcl} \frac{dD_{\mathrm{CWP}}^{2}} {d\psi _{1}} = 2P\psi _{1} + R\psi _{2} + S,\qquad \frac{dD_{\mathrm{CWP}}^{2}} {d\psi _{2}} = 2Q\psi _{2} + R\psi _{1} + T.& & {}\\ \end{array}$$

Setting these expressions equal to zero and solving them simultaneously gives the required expressions for \(\psi _{1}\) and \(\psi _{2}\). Solving \(\psi _{1} =\beta \cos \theta\) and \(\psi _{2} =\beta \sin \theta\) subject to the constraint that \(\cos ^{2}\theta +\sin ^{2}\theta = 1\) gives the rotation and scale parameters. Given these, the translation is obtained by letting \(v = \text{vec}(\mu -\beta X\varGamma )\) in the proof of Result 2.1. \(\square \)
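
In code, this reparametrisation amounts to one 2 × 2 linear solve followed by recovering the scale and angle. The sketch below assumes P, Q, R, S, T have been computed as in the proof of Result 2.2; it is an illustration of the argument, not code from the chapter.

```python
import numpy as np

def optimal_similarity_2d(P, Q, R, S, T):
    """Sketch of Result 2.4: with psi1 = beta cos(theta), psi2 = beta sin(theta),
    the stationarity conditions read
        2 P psi1 + R psi2 + S = 0  and  R psi1 + 2 Q psi2 + T = 0."""
    psi1, psi2 = np.linalg.solve(np.array([[2 * P, R], [R, 2 * Q]]),
                                 np.array([-S, -T]))
    beta_hat = np.hypot(psi1, psi2)       # beta^2 = psi1^2 + psi2^2
    theta_hat = np.arctan2(psi2, psi1)    # recovers cos(theta) and sin(theta)
    return beta_hat, theta_hat
```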

Proof of Result 2.5.

If \(\varSigma = I_{m} \otimes \varSigma _{k}\), then the similarity transformation estimates of Result 2.4 can be simplified. For the translation,

$$\displaystyle\begin{array}{rcl} \hat{\gamma }& =& [(I_{m} \otimes 1_{k})^{T}(I_{ m} \otimes \varSigma _{k})^{-1} {}\\ & & \times (I_{m} \otimes 1_{k})]^{-1}(I_{ m} \otimes 1_{k})^{T}(I_{ m} \otimes \varSigma _{k})^{-1}\text{vec}(\mu -\beta X\varGamma ) {}\\ & =& [I_{m} \otimes (1_{k}^{T}\varSigma _{ k}^{-1}1_{ k})]^{-1}[I_{ m} \otimes (1_{k}^{T}\varSigma _{ k}^{-1})]\text{vec}(\mu -\beta X\varGamma ) {}\\ & =& [I_{m} \otimes (1_{k}^{T}\varSigma _{ k}^{-1}1_{ k})^{-1}(1_{ k}^{T}\varSigma _{ k}^{-1})]\text{vec}(\mu -\beta X\varGamma ). {}\\ \end{array}$$

Therefore, \(\hat{\gamma }^{T} = (1_{k}^{T}\varSigma _{k}^{-1}1_{k})^{-1}1_{k}^{T}\varSigma _{k}^{-1}(\mu -\beta X\varGamma )\), which is zero given \(1_{k}^{T}\varSigma _{k}^{-1}X = 0 = 1_{k}^{T}\varSigma _{k}^{-1}\mu\). Referring to the notation of Result 2.2, if \(\varSigma = I_{m} \otimes \varSigma _{k}\) then \(A_{11} = A_{22} = (1_{k}^{T}\varSigma _{k}^{-1}1_{k})^{-1}1_{k}^{T}\varSigma _{k}^{-1}\) and \(A_{12} = A_{21} = 0_{k}^{T}\). Then, from Eq. (9.4), if X and \(\mu\) are located such that \(1_{k}^{T}\varSigma _{k}^{-1}X = 0 = 1_{k}^{T}\varSigma _{k}^{-1}\mu\), then \(\alpha _{i} =\delta _{i} =\zeta _{i} = 0\), for i = 1, 2, and P, Q, R, S and T simplify to

$$\displaystyle\begin{array}{rcl} P = Q& =& X_{1}^{T}\varSigma _{ k}^{-1}X_{ 1} + X_{2}^{T}\varSigma _{ k}^{-1}X_{ 2}, {}\\ R& =& -2(X_{1}^{T}\varSigma _{ k}^{-1}X_{ 2} - X_{2}^{T}\varSigma _{ k}^{-1}X_{ 1}) = 0, {}\\ S& =& -2(X_{1}^{T}\varSigma _{ k}^{-1}\mu _{ 1} + X_{2}^{T}\varSigma _{ k}^{-1}\mu _{ 2}), {}\\ T& =& 2(X_{2}^{T}\varSigma _{ k}^{-1}\mu _{ 1} - X_{1}^{T}\varSigma _{ k}^{-1}\mu _{ 2}). {}\\ \end{array}$$

The minimising rotation and scaling can then be obtained and have been derived by Brignell [2]. \(\square \)
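
Under the factored covariance \(\varSigma = I_{m} \otimes \varSigma _{k}\), the quantities P, Q, R, S, T above can be computed directly from the two coordinate columns. The sketch below combines them with the \(\psi\) parametrisation of Result 2.4 to return scale and rotation estimates; it is an illustrative reconstruction under the stated centring assumptions, not the derivation of Brignell [2].

```python
import numpy as np

def similarity_estimates_factored(X, mu, Sigma_k_inv):
    """Sketch of Result 2.5 for m = 2: assumes X and mu are centred so that
    1_k^T Sigma_k^{-1} X = 0 = 1_k^T Sigma_k^{-1} mu, hence the translation
    vanishes, R = 0 and P = Q."""
    X1, X2 = X[:, 0], X[:, 1]
    m1, m2 = mu[:, 0], mu[:, 1]
    P = X1 @ Sigma_k_inv @ X1 + X2 @ Sigma_k_inv @ X2          # = Q
    S = -2 * (X1 @ Sigma_k_inv @ m1 + X2 @ Sigma_k_inv @ m2)
    T = 2 * (X2 @ Sigma_k_inv @ m1 - X1 @ Sigma_k_inv @ m2)
    psi1, psi2 = -S / (2 * P), -T / (2 * P)                    # since R = 0, P = Q
    return np.hypot(psi1, psi2), np.arctan2(psi2, psi1)        # beta_hat, theta_hat
```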

Proof of Result 3.1.

$$\displaystyle\begin{array}{rcl} G_{\mathrm{CWP}}(X_{1},\ldots,X_{n};\varSigma )& =&\sum _{ i=1}^{n}\text{vec}(R_{i}\varGamma )^{T}\varSigma ^{-1}\text{vec}(R_{i}\varGamma ), {}\\ & = & \sum _{i=1}^{n}\left [\begin{array}{c} R_{i1}\cos \theta - R_{i2}\sin \theta \\ R_{i2}\cos \theta + R_{i1}\sin \theta \end{array} \right ]^{T}\varSigma ^{-1}\left [\begin{array}{c} R_{i1}\cos \theta - R_{i2}\sin \theta \\ R_{i2}\cos \theta + R_{i1}\sin \theta \end{array} \right ] {}\\ & = & p\cos ^{2}\theta + q\sin ^{2}\theta + 2r\cos \theta \sin \theta {}\\ \frac{dG_{\mathrm{CWP}}} {d\theta } & =& (q - p)\sin 2\theta + 2r\cos 2\theta. \\ \end{array}$$

Therefore, the minimum of \(G_{\mathrm{CWP}}\) is attained when \(\theta\) is a solution of \((q - p)\sin 2\theta + 2r\cos 2\theta = 0\). \(\square \)
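
The stationarity condition reduces to \(\tan 2\theta = 2r/(p - q)\), so the minimising angle can be picked from the two stationary points in a half-period. The helper below is a sketch based on that observation, with p, q, r assumed to have been computed as in the proof.

```python
import numpy as np

def optimal_common_rotation(p, q, r):
    """Sketch of Result 3.1: stationary points of
    G(theta) = p cos^2(theta) + q sin^2(theta) + 2 r cos(theta) sin(theta)
    satisfy (q - p) sin(2 theta) + 2 r cos(2 theta) = 0."""
    theta0 = 0.5 * np.arctan2(2 * r, p - q)
    candidates = np.array([theta0, theta0 + np.pi / 2])
    G = (p * np.cos(candidates) ** 2 + q * np.sin(candidates) ** 2
         + 2 * r * np.cos(candidates) * np.sin(candidates))
    return candidates[int(np.argmin(G))]
```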

Proof of Result 3.2.

The log-likelihood of the multivariate normal model, \(\text{vec}(X_{i}) \sim N_{km}(\text{vec}(\mu ),\varSigma )\), where the \(X_{i}\) are shapes invariant under Euclidean similarity transformations, is

$$\displaystyle\begin{array}{rcl} \log L(X_{1},\ldots,X_{n};\mu,\varSigma ) = -\frac{n} {2} \log \vert 2\pi \varSigma \vert && {}\\ & &-\frac{1} {2}\sum _{i=1}^{n}\text{vec}(\beta _{ i}X_{i}\varGamma _{i} + 1_{k}\gamma _{i}^{T}-\mu )^{T}\varSigma ^{-1}\text{vec}(\beta _{ i}X_{i}\varGamma _{i} + 1_{k}\gamma _{i}^{T}-\mu ). {}\\ \end{array}$$

Therefore, the MLE of the mean shape is the solution of

$$\displaystyle\begin{array}{rcl} \frac{d\log L} {d\mu } & =& \sum _{i=1}^{n}\varSigma ^{-1}\text{vec}(\beta _{ i}X_{i}\varGamma _{i} + 1_{k}\gamma _{i}^{T}) - n\varSigma ^{-1}\mu = 0. {}\\ \end{array}$$

Hence, \(\hat{\mu }=\bar{ X} = \frac{1} {n}\sum _{i=1}^{n}(\beta _{ i}X_{i}\varGamma _{i} + 1_{k}\gamma _{i}^{T})\) and

$$\displaystyle\begin{array}{rcl} \log L& =& -\frac{n} {2} \log \vert 2\pi \varSigma \vert -\frac{1} {2}\inf _{\beta _{i},\varGamma _{i},\gamma _{i}}\sum _{i=1}^{n}\|\beta _{ i}X_{i}\varGamma _{i} + 1_{k}\gamma _{i}^{T} -\bar{ X}\|_{\varSigma }^{2} {}\\ & =& -\frac{n} {2} \log \vert 2\pi \varSigma \vert -\frac{1} {2}G_{\mathrm{CWP}}(X_{1},\ldots,X_{n};\varSigma ). {}\\ \end{array}$$

Therefore, minimising \(G_{\mathrm{CWP}}\) is equivalent to maximising \(L(X_{1},\ldots,X_{n};\mu,\varSigma )\). \(\square \)

Proof of Result 4.1.

Let the m columns of \((I_{m} \otimes 1_{k})\) be \(1_{j}\) for j = 1, …, m, and let \(\gamma _{ij}\) be the jth element of the translation vector for shape \(X_{i}\). Then the log-likelihood, L, for the multivariate normal model can be written:

$$\displaystyle\begin{array}{rcl} \log L =& & -\frac{n} {2} \log \vert 2\pi \varSigma \vert -\frac{1} {2}\sum _{i=1}^{n}\left (\text{vec}(\beta _{ i}X_{i}\varGamma _{i}-\mu )^{T}\varSigma ^{-1}\text{vec}(\beta _{ i}X_{i}\varGamma _{i}-\mu )\right ) {}\\ & & -\frac{1} {2}\sum _{i=1}^{n}\left (-2\sum _{ j=1}^{m}\gamma _{ ij}1_{j}^{T}\varSigma ^{-1}\text{vec}(\beta _{ i}X_{i}\varGamma _{i}-\mu ) +\sum _{ j=1}^{m}\gamma _{ ij}^{2}1_{ j}^{T}\varSigma ^{-1}1_{ j}\right ). {}\\ \end{array}$$

Now, \(1_{j}^{T}\varSigma ^{-1} = \frac{\sigma _{j}^{-1}} {\sqrt{k}} 1_{j}^{T}1_{j}1_{j}^{T}\) because all the eigenvectors of \(\varSigma\) are orthogonal to \(1_{j}\) except the one proportional to \(1_{j}\). Therefore,

$$\displaystyle\begin{array}{rcl} \log L& =& -\frac{n} {2} \log \vert 2\pi \varSigma \vert -\frac{1} {2}\sum _{i=1}^{n}\left (\text{vec}(\beta _{ i}X_{i}\varGamma _{i}-\mu )^{T}\varSigma ^{-1}\text{vec}(\beta _{ i}X_{i}\varGamma _{i}-\mu )\right ) {}\\ & & -\frac{1} {2}\sum _{i=1}^{n}\left (-2\sum _{ j=1}^{m} \frac{\gamma _{ij}} {\sigma _{j}\sqrt{k}}1_{j}^{T}1_{ j}1_{j}^{T}\text{vec}(\beta _{ i}X_{i}\varGamma _{i}-\mu ) +\sum _{ j=1}^{m} \frac{\gamma _{ij}^{2}} {\sigma _{j}\sqrt{k}}1_{j}^{T}1_{ j}1_{j}^{T}1_{ j}\right ). {}\\ \end{array}$$

Given \(X_{i}\) and \(\mu\) are all centred, \(1_{j}^{T}\text{vec}(\beta _{i}X_{i}\varGamma _{i}-\mu ) = 0\), and the maximising translation is clearly \(\gamma _{ij} = 0\) for all i = 1, …, n and j = 1, …, m. \(\square \)

Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Brignell, C.J., Dryden, I.L., Browne, W.J. (2016). Covariance Weighted Procrustes Analysis. In: Turaga, P., Srivastava, A. (eds) Riemannian Computing in Computer Vision. Springer, Cham. https://doi.org/10.1007/978-3-319-22957-7_9

  • DOI: https://doi.org/10.1007/978-3-319-22957-7_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-22956-0

  • Online ISBN: 978-3-319-22957-7
