
An Interpretable Orthogonal Decomposition of Positive Square Matrices

Advances in Compositional Data Analysis

Abstract

This study of square matrices with positive entries is motivated by a previous contribution on exchange rate matrices. The sample space of these matrices is endowed with a group operation, the componentwise (Hadamard) product. An inner product, identified with the ordinary inner product of the componentwise logarithms of the matrices, completes the sample space into a Euclidean space. This structure allows the introduction of two orthogonal decompositions: the first inspired by independence in probability tables, and the second related to reciprocal matrices, whose transpose is the componentwise inverse. Their combination results in an orthogonal decomposition into four easily computable parts. The merit of this decomposition is that, applied to exchange rate matrices, the four component matrices admit an intuitive interpretation.


References

  • C. Barceló-Vidal, J.A. Martín-Fernández, The mathematics of compositional analysis. Austrian J. Stat. 45, 57–71 (2016)

  • D. Billheimer, P. Guttorp, W. Fagan, Statistical interpretation of species composition. J. Amer. Stat. Assoc. 96(456), 1205–1214 (2001)

  • J.J. Egozcue, V. Pawlowsky-Glahn, A compositional approach to contingency tables, in Proceedings of the Ninth International Conference on Computer Data Analysis and Modelling, vol. 1 (Publishing center of BSU, Minsk, 2010), pp. 101–107

  • J.J. Egozcue, V. Pawlowsky-Glahn, Compositional data: the sample space and its structure (with discussion). TEST 28(3), 599–638 (2019). https://doi.org/10.1007/s11749-019-00670-6

  • J.J. Egozcue, V. Pawlowsky-Glahn, G. Mateu-Figueras, C. Barceló-Vidal, Isometric logratio transformations for compositional data analysis. Math. Geol. 35(3), 279–300 (2003)

  • J.J. Egozcue, V. Pawlowsky-Glahn, M. Templ, K. Hron, Independence in contingency tables using simplicial geometry. Commun. Stat. A-Theor. 44(18), 3978–3996 (2015)

  • K. Fačevicová, K. Hron, V. Todorov, M. Templ, General approach to coordinate representation of compositional tables. Scand. J. Stat. 43(4), 962–977 (2016)

  • R.A. Horn, C.R. Johnson, Matrix Analysis (reprint 1988 ed.) (Cambridge University Press, New York, 1985), 561 p

  • W.W. Leontief, Input-Output Economics (Oxford University Press, New York, 1966), 359 p

  • W.L. Maldonado, J.J. Egozcue, V. Pawlowsky-Glahn, No-arbitrage matrices of exchange rates: some characterizations. Int. J. Econ. Theory (2019). https://doi.org/10.1111/ijet.12249

  • M.I. Ortego, J.J. Egozcue, Bayesian estimation of the orthogonal decomposition of a contingency table. Austrian J. Stat. 45(4), 45–56 (2016)

  • V. Pawlowsky-Glahn, J.J. Egozcue, D. Lovell, Tools for compositional data with a total. Stat. Model. 15(2), 175–190 (2015a)

  • V. Pawlowsky-Glahn, J.J. Egozcue, R. Tolosana-Delgado, Modeling and Analysis of Compositional Data, Statistics in Practice (Wiley, Chichester, 2015b), 272 p

  • V. Pawlowsky-Glahn, J.J. Egozcue, Geometric approach to statistical analysis on the simplex. Stoch. Env. Res. Risk A 15(5), 384–398 (2001)

  • V. Pawlowsky-Glahn, J.J. Egozcue, M. Planes-Pedra, Survey data on perceptions of contraceptive measures as compositional tables. Rev. Lat. Amer. Psicol. 50(3), 179–186 (2019)

  • B. Peterson, M. Olinick, Markov chains, substochastic matrices and positive solutions of matrix equations. Math. Modell. 3, 221–239 (1982)

  • V. Todorov, K. Fačevicová, K. Hron, D. Guo, M. Templ, Statistical analysis of compositional 2×2 tables with an economic application, in Proceedings of the 5th Workshop on Compositional Data Analysis, CoDaWork 2013 (2013), pp. 123–130. ISBN 978-3-200-03103-6


Acknowledgements

Research by J.J. Egozcue was supported by the Ministerio de Ciencia, Innovación y Universidades (Spain) under project RTI2018-095518-B-C22 (2019-2021).

Wilfredo L. Maldonado thanks the CNPq of Brazil (grant 306473/2018-6) and the FAPDF (grant 00193-00001833/2019-50) for financial support.

Author information

Correspondence to J.J. Egozcue.

Appendix: Proofs of Theorems and Properties

Theorem 2

(Subspace of independent matrices) The set \(\mathbb {P}_\mathrm {ind}^D\) of independent square matrices with positive entries is a 2n-dimensional subspace of \(\mathbb {P}^D\).

Proof

For any A and B in \(\mathbb {P}_\mathrm {ind}^D\) and \(\alpha \in \mathbb {R}\), it must be shown that \(C=(\alpha \odot A)\oplus B\) is in \(\mathbb {P}_\mathrm {ind}^D\); that is, \(\mathbb {P}_\mathrm {ind}^D\) is closed under the vector space operations \(\oplus \) and \(\odot \). Assume that \(A=U_c\oplus U_r^\top \) and \(B=W_c\oplus W_r^\top \). Then

$$ C = (U_c^\alpha ) \oplus (U_r^\alpha )^\top \oplus W_r^\top \oplus W_c \ , $$

where the \(\alpha \)-powers operate componentwise and, like the products indicated by \(\oplus \), preserve positivity. Since \(\oplus \) is commutative, reordering the terms yields

$$ C = [(U_c^\alpha )\oplus W_c ] \oplus [(U_r^\alpha )\oplus W_r ]^\top \ , $$

where each of the two bracketed terms has all its columns, respectively rows, equal, thus showing that \(C\in \mathbb {P}_\mathrm {ind}^D\). The space of marginal column matrices like \(U_c\) has dimension n; they are generated by vectors \(\mathbf {u}_c\), or their logarithmic coordinates \(\tilde{\mathbf {u}}_c=\log \mathbf {u}_c\), which form an n-dimensional subspace of \(\mathbb {R}^n\). The same holds for \(U_r\). Since \(U_c\) and \(U_r^\top \) are linearly independent in \(\mathbb {P}^D\), the dimension of the space they generate in \(\mathbb {P}_\mathrm {ind}^D\) is 2n.     \(\square \)
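The closure argument can be illustrated numerically. The following sketch is ours, not part of the chapter; it assumes NumPy, realizes \(\oplus \) as the Hadamard product and \(\alpha \odot A\) as the componentwise power \(A^\alpha \), and uses a hypothetical helper named independent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def independent(u_c, u_r):
    # Independent matrix built from positive marginal vectors: A = u_c u_r^T
    return np.outer(u_c, u_r)

A = independent(rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 2.0, n))
B = independent(rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 2.0, n))
alpha = 1.7

# C = (alpha ⊙ A) ⊕ B: componentwise power, then Hadamard product
C = (A ** alpha) * B

# C is again an outer product of positive vectors, hence rank one
assert np.linalg.matrix_rank(C) == 1
```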

Theorem 3

(Closest independent positive matrix) Let A be a matrix with positive entries, \(A\in \mathbb {P}^D\), \(D=n^2\). The orthogonal projection of A onto the subspace of independent matrices \(\mathbb {P}^D_\mathrm {ind}\) is the matrix \(A_\mathrm {ind}\) whose components are

$$ [A_\mathrm {ind}]_{ij} = \frac{\mathrm {g_m}(\mathbf {a}^j)\mathrm {g_m}(\mathbf {a}_i)}{\mathrm {g_m}(A)} \ , $$

where \(\mathbf {a}^j\), \(\mathbf {a}_i\) are the columns and rows of A, respectively.

Proof

This proof follows the approach used in Maldonado et al. (2019). The statement is equivalent to finding the \(A_\mathrm {ind}\in \mathbb {P}_\mathrm {ind}^D\) that minimizes \(\mathrm {d}_+^2(A,A_\mathrm {ind})\). To carry out this minimization, let \(\mathbf {u}_c=(u_{c1},u_{c2},\dots ,u_{cn})^\top \), \(\mathbf {u}_r=(u_{r1},u_{r2},\dots ,u_{rn})^\top \) be vectors with positive entries such that \(A_\mathrm {ind}=\mathbf {u}_c \mathbf {u}_r^\top \). Then, they minimize

$$\begin{aligned} \mathrm {d}_+^2(A,A_\mathrm {ind}) = \Vert \log A - \log A_\mathrm {ind}\Vert ^2= \sum _{ij} [ \log a_{ij} - (\log u_{ci} + \log u_{rj})]^2 \ . \end{aligned}$$
(5)

Taking derivatives with respect to \(u_{ci}\) and \(u_{rj}\), \(i,j=1,2,\dots ,n\) and equating to zero, we find

$$\begin{aligned} \sum _{i=1}^n [\log a_{ij} - (\log u_{ci} +\log u_{rj})]=0 \ , \quad \sum _{j=1}^n [\log a_{ij} - (\log u_{ci} +\log u_{rj})]=0 \ . \end{aligned}$$
(6)

Rearranging these 2n equations and taking exponentials yields

$$ u_{ci} = \frac{\mathrm {g_m}(\mathbf {a}_i)}{\mathrm {g_m}(\mathbf {u}_r)}\ , \quad u_{rj} = \frac{\mathrm {g_m}(\mathbf {a}^j)}{\mathrm {g_m}(\mathbf {u}_c)}\ , $$

where \(\mathrm {g_m}\) denotes the geometric mean of its arguments, and \(\mathbf {a}_i\), \(\mathbf {a}^j\) denote the rows and columns of A, respectively. Moreover, adding all the equations in (6), we conclude that \(\mathrm {g_m}(\mathbf {u}_c) \mathrm {g_m}(\mathbf {u}_r)= \mathrm {g_m}(A)\). Combining the two results,

$$ [A_\mathrm {ind}]_{ij} = u_{ci}\cdot u_{rj} = \frac{\mathrm {g_m}(\mathbf {a}^j)\mathrm {g_m}(\mathbf {a}_i)}{\mathrm {g_m}(A)} \ , $$

which proves the statement. Note that the factorization \(A_\mathrm {ind}=\mathbf {u}_c \mathbf {u}_r^\top \) is not unique.     \(\square \)
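In practice the projection is computed in logarithmic coordinates, where geometric means become arithmetic means. A minimal sketch, assuming NumPy; the function name closest_independent is ours, not from the chapter.

```python
import numpy as np

def closest_independent(A):
    """Orthogonal projection of a positive matrix A onto P_ind (Theorem 3):
    [A_ind]_ij = g_m(a^j) g_m(a_i) / g_m(A), with g_m the geometric mean."""
    L = np.log(A)
    row_means = L.mean(axis=1)   # log g_m of the rows a_i
    col_means = L.mean(axis=0)   # log g_m of the columns a^j
    total_mean = L.mean()        # log g_m of the whole matrix
    return np.exp(row_means[:, None] + col_means[None, :] - total_mean)

rng = np.random.default_rng(1)
A = rng.uniform(0.5, 3.0, (4, 4))
A_ind = closest_independent(A)

# The residual log coordinates have null row and column sums (see Property 4b)
L_int = np.log(A) - np.log(A_ind)
assert np.allclose(L_int.sum(axis=0), 0) and np.allclose(L_int.sum(axis=1), 0)
```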

Property 2

Let \(A\in \mathbb {P}^D\) be a positive matrix with orthogonal decomposition \(A=A_\mathrm {ind}\oplus A_\mathrm {int}\) with \(A_\mathrm {ind}\in \mathbb {P}_\mathrm {ind}^D\), \(A_\mathrm {int}\in \mathbb {P}_\mathrm {int}^D\). Then, the rank of \(A_\mathrm {ind}\) is one and the rank of \(A_\mathrm {int}\) is equal to the rank of A, that is \(\mathrm {rank}(A_\mathrm {ind})=1\) and \(\mathrm {rank}(A_\mathrm {int})=\mathrm {rank}(A)\).

Proof

The rank of \(A_\mathrm {ind}\) is one, since by Definition 2 it is the outer product of two vectors, \(A_\mathrm {ind}= \mathbf {u}_c \mathbf {u}_r^\top \). The perturbation \(A=A_\mathrm {ind}\oplus A_\mathrm {int}\) multiplies the ith row of \(A_\mathrm {int}\) by \(u_{ci}\) and its jth column by \(u_{rj}\). Let \(\mathbf {a}^j\), \(j=1,2,\dots ,n\), be the columns of \(A_\mathrm {int}\). If there is a null linear combination \(\sum _j \alpha _j \mathbf {a}^j =\boldsymbol{0}\) for some non-null real \(\alpha _j\)'s, then the columns of A satisfy a null linear combination with coefficients \(\alpha _j/u_{rj}\), since the componentwise multiplication by \(\mathbf {u}_c\), common to all columns, maintains the combination null. Conversely, any null linear combination of the columns of A induces a null linear combination of the columns of \(A_\mathrm {int}\). The same holds for rows, after componentwise multiplication by \(\mathbf {u}_r\). This means that, under the perturbation \(A=A_\mathrm {ind}\oplus A_\mathrm {int}\), the possible linear combinations of rows or columns remain as they were, null or non-null; therefore, the rank is maintained after perturbation by \(A_\mathrm {ind}\). This property is a strong version of the rank inequality for Hadamard products, \(\mathrm {rank}(A_\mathrm {ind}\oplus A_\mathrm {int})\le \mathrm {rank}(A_\mathrm {ind})\cdot \mathrm {rank}(A_\mathrm {int})\) (Horn and Johnson 1985): here \(\mathrm {rank}(A_\mathrm {ind})=1\) and equality holds.     \(\square \)
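A quick numerical check of this rank property. The sketch assumes NumPy and repeats the hypothetical closest_independent helper so that it is self-contained; the rank-2 test matrix is an arbitrary choice.

```python
import numpy as np

def closest_independent(A):
    # Projection onto P_ind (Theorem 3), computed in log coordinates
    L = np.log(A)
    return np.exp(L.mean(axis=1, keepdims=True)
                  + L.mean(axis=0, keepdims=True) - L.mean())

rng = np.random.default_rng(2)
u1, v1, u2, v2 = (rng.uniform(0.5, 2.0, 5) for _ in range(4))
A = np.outer(u1, v1) + np.outer(u2, v2)   # positive matrix of rank 2

A_ind = closest_independent(A)
A_int = A / A_ind                          # so that A = A_ind ⊕ A_int (Hadamard)

assert np.linalg.matrix_rank(A_ind) == 1
assert np.linalg.matrix_rank(A_int) == np.linalg.matrix_rank(A) == 2
```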

Property 3

Let \(A=A_\mathrm {ind}\oplus A_\mathrm {int}\) be the orthogonal decomposition of A and denote by \(\Vert \cdot \Vert _+\) the norm in \(\mathbb {P}^D\). Then,

(a) The decomposition is unique;

(b) \(\Vert A\Vert _+^2=\Vert A_\mathrm {ind}\Vert _+^2 + \Vert A_\mathrm {int}\Vert _+^2\).

Proof

The uniqueness of the orthogonal decomposition into a subspace and its orthogonal complement, and the Pythagorean theorem, are standard properties of Hilbert spaces and, in particular, of Euclidean spaces.     \(\square \)
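The Pythagorean identity (b) can be verified numerically in logarithmic coordinates, where \(\Vert \cdot \Vert _+\) becomes the Frobenius norm of the log matrix. A sketch assuming NumPy, again repeating the hypothetical projection helper for self-containment.

```python
import numpy as np

def closest_independent(A):
    # Projection onto P_ind (Theorem 3), in log coordinates
    L = np.log(A)
    return np.exp(L.mean(axis=1, keepdims=True)
                  + L.mean(axis=0, keepdims=True) - L.mean())

rng = np.random.default_rng(3)
A = rng.uniform(0.5, 3.0, (4, 4))
L, L_ind = np.log(A), np.log(closest_independent(A))
L_int = L - L_ind   # log coordinates of the interaction part

# ||A||_+^2 = ||A_ind||_+^2 + ||A_int||_+^2, as Frobenius norms of the logs
assert np.isclose((L ** 2).sum(), (L_ind ** 2).sum() + (L_int ** 2).sum())
```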

Property 4

(Logarithmic coordinates in \(\mathbb {P}_\mathrm {ind}^D\) and \(\mathbb {P}_\mathrm {int}^D\)) Let \(A_\mathrm {ind}\in \mathbb {P}_\mathrm {ind}^D\) and \(A_\mathrm {int}\in \mathbb {P}_\mathrm {int}^D\) such that \(A=A_\mathrm {ind}\oplus A_\mathrm {int}\). Then, the respective logarithmic coordinates \(\tilde{A}_\mathrm {ind}=\log (A_\mathrm {ind})\), \(\tilde{A}_\mathrm {int}=\log (A_\mathrm {int})\) satisfy the following:

(a) \(\tilde{A}_\mathrm {ind}\) is obtained from the average coordinates of rows and columns (marginals) of \(\tilde{A}\),

$$\begin{aligned} \tilde{A}_\mathrm {ind}&= \tilde{\mathbf {u}}_c \boldsymbol{1}^\top + \boldsymbol{1}\tilde{\mathbf {u}}_r^\top \ , \\ \tilde{\mathbf {u}}_r&= \frac{1}{n} \tilde{A}^\top \boldsymbol{1} - \frac{1}{2n^2}\boldsymbol{1}\boldsymbol{1}^\top \tilde{A} \boldsymbol{1}\ ,\\ \tilde{\mathbf {u}}_c&= \frac{1}{n} \tilde{A} \boldsymbol{1} - \frac{1}{2n^2}\boldsymbol{1}\boldsymbol{1}^\top \tilde{A} \boldsymbol{1} \ . \end{aligned}$$

(b) The (arithmetic) marginals of \(\tilde{A}_\mathrm {int}\) are null,

$$ \frac{1}{n}\tilde{A}_\mathrm {int}\boldsymbol{1}=\boldsymbol{0}\ , \quad \frac{1}{n}\tilde{A}_\mathrm {int}^\top \boldsymbol{1}=\boldsymbol{0}\ . $$

(c) The logarithmic coordinates of the interaction part are obtained by double centering of \(\tilde{A}\),

$$ \tilde{A}_\mathrm {int}= \left( I_n-\frac{1}{n}\boldsymbol{1}\boldsymbol{1}^\top \right) \ \tilde{A} \ \left( I_n-\frac{1}{n}\boldsymbol{1}\boldsymbol{1}^\top \right) \ , $$

where \(I_n\) denotes the identity matrix of order n.

Proof

(a) Due to Definition 2, \(A_\mathrm {ind}\in \mathbb {P}_\mathrm {ind}^D\) has the form \(A_\mathrm {ind}= (\mathbf {u}_c\boldsymbol{1}^\top )\oplus (\boldsymbol{1}\mathbf {u}_r^\top )\). Taking \(\log \) componentwise, the products in \(\oplus \) become sums, with \(\log (\mathbf {u}_c\boldsymbol{1}^\top ) = \tilde{\mathbf {u}}_c\boldsymbol{1}^\top \) and \(\log (\boldsymbol{1}\mathbf {u}_r^\top ) = \boldsymbol{1}\tilde{\mathbf {u}}_r^\top \), so that \(\tilde{A}_\mathrm {ind}=\tilde{\mathbf {u}}_c\boldsymbol{1}^\top + \boldsymbol{1}\tilde{\mathbf {u}}_r^\top \). The expressions of \(\tilde{\mathbf {u}}_r\) and \(\tilde{\mathbf {u}}_c\) are obtained from Theorem 3 by taking logarithms.

(b) From the decomposition \(A=A_\mathrm {ind}\oplus A_\mathrm {int}\), taking logarithmic coordinates, \(\tilde{A}_\mathrm {int}=\tilde{A} - \tilde{A}_\mathrm {ind}\). Computing the arithmetic marginals of both sides of this equality, and taking into account the expressions of \(\tilde{\mathbf {u}}_c\), \(\tilde{\mathbf {u}}_r\) in (a), the desired result follows. Note that this means that the arithmetic marginals of \(\tilde{A}\) and \(\tilde{A}_\mathrm {ind}\) are equal.

(c) The decomposition \(A=A_\mathrm {ind}\oplus A_\mathrm {int}\) in logarithmic coordinates implies \(\tilde{A}_\mathrm {int}= \tilde{A} - \tilde{A}_\mathrm {ind}\). Substituting the expressions of \(\tilde{A}_\mathrm {ind}\), \(\tilde{\mathbf {u}}_c\) and \(\tilde{\mathbf {u}}_r\) given in part (a) of this property, and after some tedious computation, the double centering expression is obtained.     \(\square \)
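Parts (b) and (c) translate directly into code: double centering of \(\tilde{A}\) yields \(\tilde{A}_\mathrm {int}\), whose arithmetic marginals vanish. A sketch assuming NumPy; the function name interaction_log is ours.

```python
import numpy as np

def interaction_log(A):
    """Log coordinates of the interaction part via double centering,
    Property 4(c): (I_n - (1/n) 1 1^T) log(A) (I_n - (1/n) 1 1^T)."""
    n = A.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return J @ np.log(A) @ J

rng = np.random.default_rng(4)
A = rng.uniform(0.5, 3.0, (4, 4))
L_int = interaction_log(A)

# Property 4(b): null arithmetic marginals of the interaction coordinates
assert np.allclose(L_int.sum(axis=0), 0) and np.allclose(L_int.sum(axis=1), 0)
```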

Theorem 4

(Subspace of reciprocal matrices) The set of reciprocal matrices in \(\mathbb {P}^D\) is a \((n(n-1)/2)\)-dimensional subspace of \(\mathbb {P}^D\), denoted \(\mathbb {P}_\mathrm {rec}^D\).

Proof

The statement is equivalent to proving that anti-symmetric matrices form a \((n(n-1)/2)\)-dimensional subspace of the square matrices equipped with the ordinary sum and the multiplication by real scalars. That is, if \(\tilde{A}\) and \(\tilde{B}\) are anti-symmetric and \(\alpha \in \mathbb {R}\), then \(\tilde{C}=\alpha \tilde{A}+\tilde{B}\) is also anti-symmetric. In fact, for any entry of \(\tilde{C}\), \(\tilde{c}_{ij}= \alpha \cdot \tilde{a}_{ij} + \tilde{b}_{ij}\). Using the anti-symmetry of \(\tilde{A}\) and \(\tilde{B}\), this yields

$$ \tilde{c}_{ij}=-\alpha \cdot \tilde{a}_{ji} - \tilde{b}_{ji} = -\tilde{c}_{ji} \ ,\quad \tilde{C} = -\tilde{C}^\top \ . $$

Therefore, \(\tilde{C}\) is anti-symmetric and \(C\in \mathbb {P}_\mathrm {rec}^D\). The dimension of the subspace is equal to the number of free logarithmic coordinates, for instance those in the lower triangle of the matrix, excluding the diagonal, that is \(n(n-1)/2\).     \(\square \)
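The closure argument is easy to reproduce numerically: reciprocal matrices are exactly the componentwise exponentials of anti-symmetric matrices. A sketch assuming NumPy; random_reciprocal is a hypothetical helper of ours.

```python
import numpy as np

rng = np.random.default_rng(5)

def random_reciprocal(n, rng):
    # exp of an anti-symmetric matrix: the transpose equals the
    # componentwise inverse, so the result is reciprocal
    S = rng.normal(size=(n, n))
    return np.exp(S - S.T)

A, B, alpha = random_reciprocal(4, rng), random_reciprocal(4, rng), 0.8
C = (A ** alpha) * B               # (alpha ⊙ A) ⊕ B
assert np.allclose(C.T, 1.0 / C)   # C is again reciprocal
```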

Theorem 5

(Orthogonal decomposition) The space of square positive matrices \(\mathbb {P}^D\), \(D=n^2\), is uniquely decomposed into the four mutually orthogonal subspaces \(\mathbb {P}_\mathrm {rind}^D\), \(\mathbb {P}_\mathrm {rint}^D\), \(\mathbb {P}_\mathrm {sind}^D\), \(\mathbb {P}_\mathrm {sint}^D\) whose dimensions are \(\mathrm {dim}(\mathbb {P}_\mathrm {rind}^D)=n\), \(\mathrm {dim}(\mathbb {P}_\mathrm {sind}^D)=n\), \(\mathrm {dim}(\mathbb {P}_\mathrm {rint}^D)=n(n-3)/2\), \(\mathrm {dim}(\mathbb {P}_\mathrm {sint}^D)=n(n-1)/2\). Any matrix \(A\in \mathbb {P}^D\) is expressed in a unique way as

$$\begin{aligned} A=A_\mathrm {rind}\oplus A_\mathrm {rint}\oplus A_\mathrm {sind}\oplus A_\mathrm {sint}\ . \end{aligned}$$
(7)

In logarithmic coordinates, the decomposition of \(A\in \mathbb {P}^D\) is

$$ \tilde{A}_\mathrm {rec}=\frac{1}{2}(\tilde{A} - \tilde{A}^\top )\ ,\quad \tilde{A}_\mathrm {sym}=\frac{1}{2}(\tilde{A} + \tilde{A}^\top )\ $$

and

$$\begin{aligned} \tilde{A}_\mathrm {rint}&= \left( I_n-\frac{1}{n}\boldsymbol{1}\boldsymbol{1}^\top \right) \tilde{A}_\mathrm {rec}\left( I_n-\frac{1}{n}\boldsymbol{1}\boldsymbol{1}^\top \right) \ , \\ \tilde{A}_\mathrm {sint}&= \left( I_n-\frac{1}{n}\boldsymbol{1}\boldsymbol{1}^\top \right) \tilde{A}_\mathrm {sym}\left( I_n-\frac{1}{n}\boldsymbol{1}\boldsymbol{1}^\top \right) \ , \\ \tilde{A}_\mathrm {rind}&= \tilde{A}_\mathrm {rec}- \tilde{A}_\mathrm {rint}\ , \\ \tilde{A}_\mathrm {sind}&= \tilde{A}_\mathrm {sym}- \tilde{A}_\mathrm {sint}\ . \end{aligned}$$
(8)

Proof

The orthogonality of the four subspaces is a consequence of the two orthogonal decompositions. Consider \(A\in \mathbb {P}^D\) and obtain its decomposition as in Eq. (3). Take \(A_\mathrm {rind}\); it is orthogonal to \(A_\mathrm {rint}\) due to the orthogonal decomposition in Corollary 1. Additionally, it is orthogonal to \(A_\mathrm {sind}\) and \(A_\mathrm {sint}\) due to its orthogonality to the subspace \(\mathbb {P}_\mathrm {sym}^D\). The same reasoning applies to each of the four components of the decomposition. The uniqueness statement follows from the uniqueness of orthogonal decompositions.

The dimensions of the subspaces can be studied using logarithmic coordinates. The subspaces \(\mathbb {P}_\mathrm {rec}^D\) and \(\mathbb {P}_\mathrm {sym}^D\) jointly span \(\mathbb {P}^D\), whose dimension is \(D=n^2\). The logarithmic (orthogonal) coordinates that determine \(\mathbb {P}_\mathrm {rec}^D\) can be taken as those in the lower triangle, excluding the diagonal, since they determine the anti-symmetric values in the upper triangle, and the diagonal coordinates are null. Therefore, the dimension of \(\mathbb {P}_\mathrm {rec}^D\) is \(n(n-1)/2\). Since matrices in \(\mathbb {P}_\mathrm {rind}^D\) can be constructed from a single n-vector, the dimension is n, and the dimension of \(\mathbb {P}_\mathrm {rint}^D\) is \(n(n-1)/2-n\). The dimension of \(\mathbb {P}_\mathrm {sym}^D\) is \(n(n+1)/2\), since the logarithmic coordinates needed to reconstruct a symmetric matrix are those in the lower triangle including the diagonal. The dimension of \(\mathbb {P}_\mathrm {sind}^D\) is n, as these matrices are built from two equal n-vectors of marginals. Then, the dimension of \(\mathbb {P}_\mathrm {sint}^D\) is \(n(n+1)/2-n\).

The fact that the logarithmic coordinates of \(A_\mathrm {rec}\in \mathbb {P}^D_\mathrm {rec}\) are anti-symmetric leads to decomposing \(\tilde{A}\) into its symmetric and anti-symmetric components, a decomposition which is unique; \(\tilde{A}_\mathrm {rec}\) and \(\tilde{A}_\mathrm {sym}\) are identified with the anti-symmetric and symmetric parts, respectively. Since the separation of the interaction components corresponds to a double centering, \(\tilde{A}_\mathrm {rint}\) and \(\tilde{A}_\mathrm {sint}\) are obtained in this way, and the subtracted matrices are \(\tilde{A}_\mathrm {rind}\) and \(\tilde{A}_\mathrm {sind}\).     \(\square \)
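Equations (8) give a direct recipe for computing the four parts. A compact sketch in logarithmic coordinates, assuming NumPy; the function name decompose and the variable names are ours.

```python
import numpy as np

def decompose(A):
    """Four-part orthogonal decomposition of a positive square matrix,
    following Eq. (8) in logarithmic coordinates."""
    n = A.shape[0]
    L = np.log(A)
    J = np.eye(n) - np.ones((n, n)) / n        # double-centering projector
    L_rec, L_sym = (L - L.T) / 2, (L + L.T) / 2
    L_rint, L_sint = J @ L_rec @ J, J @ L_sym @ J
    L_rind, L_sind = L_rec - L_rint, L_sym - L_sint
    return tuple(np.exp(M) for M in (L_rind, L_rint, L_sind, L_sint))

rng = np.random.default_rng(6)
A = rng.uniform(0.5, 3.0, (5, 5))
A_rind, A_rint, A_sind, A_sint = decompose(A)

# A is recovered as the perturbation (Hadamard product) of the four parts
assert np.allclose(A, A_rind * A_rint * A_sind * A_sint)
```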


Copyright information

© 2021 Springer Nature Switzerland AG


Cite this chapter

Egozcue, J.J., Maldonado, W.L. (2021). An Interpretable Orthogonal Decomposition of Positive Square Matrices. In: Filzmoser, P., Hron, K., Martín-Fernández, J.A., Palarea-Albaladejo, J. (eds) Advances in Compositional Data Analysis. Springer, Cham. https://doi.org/10.1007/978-3-030-71175-7_1
