Abstract
In this paper, we discuss the embedding problem for centrosymmetric matrices, which are higher order generalizations of the matrices occurring in strand symmetric models. These models capture the substitution symmetries arising from the double helix structure of DNA. Deciding whether a transition matrix is embeddable enables us to know whether the observed substitution probabilities are consistent with a homogeneous continuous-time substitution model, such as the Kimura models, the Jukes-Cantor model or the general time-reversible model. On the other hand, the generalization to higher order matrices is motivated by the setting of synthetic biology, which works with different sizes of genetic alphabets.
1 Introduction
Phylogenetics is the study of the evolutionary relationships among biological entities, known as taxa, and aims to infer their evolutionary history. In order to model evolution, we consider a directed acyclic graph, called a phylogenetic tree, depicting the evolutionary relationships amongst a selected set of taxa. Phylogenetic trees consist of vertices and edges. Vertices represent taxa, while edges between vertices represent the evolutionary processes between the taxa.
In order to describe the real evolutionary process along an edge of a phylogenetic tree, one often assumes that the evolutionary data arose following a Markov process. A Markov process is a random process in which the future is independent of the past, given the present. Under this Markov process, the transitions between n states are given by conditional probabilities collected in an \(n\times n\) Markov matrix M, that is, a square matrix whose entries are nonnegative and whose rows sum to one. A well-known problem in probability theory is the so-called embedding problem, which was initially posed by Elfving (Elfving 1937). The embedding problem asks whether, given a Markov matrix M, one can find a real square matrix Q with rows summing to zero and nonnegative off-diagonal entries such that \(M=\exp (Q)\). The matrix Q is called a Markov generator.
In the complex setting, the embedding problem is completely solved by Higham (2008): a complex matrix A is embeddable if and only if A is invertible. However, as our motivation arises from molecular models of evolution, we are interested in the embedding problem over the real numbers, so from now on we will denote by M a real Markov matrix. It was shown by Kingman (1962) that if an \(n \times n\) real Markov matrix M is embeddable, then \(\det {M}>0\). Moreover, in the same work by Kingman it was shown that \(\det {M}>0\) is a necessary and sufficient condition for a \(2\times 2\) Markov matrix M to be embeddable. For \(3\times 3\) Markov matrices a complete solution of the embedding problem is provided in a series of papers (James 1973; Johansen 1974; Carette 1995; Chen and Chen 2011), where the characterisation of embeddable matrices depends on the Jordan decomposition of the Markov matrix. For \(4 \times 4\) Markov matrices the embedding problem is completely settled in a series of papers (Casanellas et al. 2020a, 2023; Roca-Lacostena and Fernández-Sánchez 2018a), where, similarly to the \(3\times 3\) case, the full characterisation of embeddable matrices is split into cases depending on the Jordan form of the Markov matrices.
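For the \(2\times 2\) case, Kingman's criterion and an explicit generator are easy to verify numerically. The following sketch is our own illustration, not taken from this paper: the parametrization \(M=\begin{pmatrix}1-a&a\\ b&1-b\end{pmatrix}\) and the closed form \(Q=\frac{\log \det M}{\det M-1}(M-I)\) for a generator are standard facts.

```python
import numpy as np
from scipy.linalg import expm

# Hedged sketch: for a 2x2 Markov matrix M = [[1-a, a], [b, 1-b]],
# Kingman's criterion says M is embeddable iff det(M) = 1 - a - b > 0.
# In that case Q = log(det M) / (det M - 1) * (M - I) is a generator.
a, b = 0.2, 0.3                       # example substitution probabilities
M = np.array([[1 - a, a], [b, 1 - b]])
lam = np.linalg.det(M)                # = 1 - a - b
assert lam > 0                        # Kingman's embeddability condition
Q = np.log(lam) / (lam - 1) * (M - np.eye(2))
assert np.allclose(expm(Q), M)        # Q is indeed a Markov generator
assert np.allclose(Q.sum(axis=1), 0)  # rows sum to zero
assert Q[0, 1] >= 0 and Q[1, 0] >= 0  # off-diagonal entries nonnegative
```

For \(\det M\le 0\) no real generator exists, which is exactly Kingman's obstruction.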
For the general case of \(n \times n\) Markov matrices, there are several results; some present necessary conditions (Elfving 1937; Kingman 1962; Runnenberg 1962), while others present sufficient conditions (James 1973; Fuglede 1988; Goodman 1970; Davies et al. 2010) for the embeddability of Markov matrices. Moreover, the embedding problem has been solved for special \(n\times n\) matrices with a biological interest such as equal-input and circulant matrices (Baake and Sumner 2020), group-based models (Ardiyansyah et al. 2021) and time-reversible models (Jia 2016). Despite the fact that there is no explicit theoretical solution for the embeddability of general \(n\times n\) Markov matrices, there are results (Casanellas et al. 2023) that enable us to decide whether an \(n\times n\) Markov matrix with distinct eigenvalues is embeddable or not. This is achieved by providing an algorithm that outputs all Markov generators of such a Markov matrix (Casanellas et al. 2023; Roca-Lacostena 2021).
In this paper, we focus on the embedding problem for \(n \times n\) matrices that are symmetric about their center and are called centrosymmetric matrices (see Definition 2). We also study a variation of the famous embedding problem called model embeddability, where apart from the requirement that the Markov matrix is the matrix exponential of a rate matrix, we additionally ask that the rate matrix follows the model structure. For instance, for centrosymmetric matrices, model embeddability means that the rate matrix is also centrosymmetric.
The motivation for studying centrosymmetric matrices comes from evolutionary biology, as the most general nucleotide substitution model when considering both DNA strands admits any \(n\times n\) centrosymmetric Markov matrix as a transition matrix, where n is the even number of nucleotides. For instance, by considering the four natural nucleotides A, T, C, G we arrive at the strand symmetric Markov model, a well-known phylogenetic model whose substitution probabilities reflect the symmetry arising from the complementarity between the two strands that DNA is composed of (see (Casanellas and Sullivant 2005)). In particular, a strand symmetric model for DNA must have the following equalities of probabilities in the root distribution:
$$\begin{aligned} \pi _A=\pi _T, \qquad \pi _C=\pi _G, \end{aligned}$$
(1.1)
and the following equalities of probabilities in the transition matrices \((\theta _{ij})\): \(\theta _{ij}=\theta _{{\bar{i}}{\bar{j}}}\) for all nucleotides i and j, where \({\bar{i}}\) denotes the Watson-Crick complement of i.
Therefore, the corresponding transition matrices of this model are \(4\times 4\) centrosymmetric matrices, usually called strand symmetric Markov matrices in this context. In this article, we will use the terminology \(4\times 4\) centrosymmetric Markov matrix and strand symmetric Markov matrix interchangeably. In the strand symmetric model there are fewer restrictions on the way genes mutate from ancestor to descendant compared to other widely known molecular models of evolution. In fact, special cases of the strand symmetric model are the group-based phylogenetic models such as the Jukes-Cantor (JC) model, the Kimura 2-parameter (K2P) and Kimura 3-parameter (K3P) models. The algebraic structure of strand symmetric models was initially studied in (Casanellas and Sullivant 2005), where it was argued that strand symmetric models capture more biologically meaningful features of real DNA sequences than the commonly used group-based models. For instance, in any group-based model the stationary distribution of bases for a single species is always the uniform distribution, while computational evidence in (Yap and Pachter 2004) suggests that the stationary distribution of bases for a single species is rarely uniform, although it must always satisfy the symmetries (1.1) arising from nucleotide complementarity, as assumed by the strand symmetric model.
In this article, we also explore higher order centrosymmetric matrices for which \(n > 4\), which is justified by the use of synthetic nucleotides. One of the main goals of synthetic biology is to expand the genetic alphabet to include an unnatural or synthetic base pair. More letters in a genetic system could lead to an increased potential for retrievable information storage, barcoding and combinatorial tagging (Benner and Sismour 2005). Naturally, the four-letter genetic alphabet consists of just two pairs, A-T and G-C. In 2012, a genetic system comprising three base pairs was introduced in (Malyshev et al. 2012). In addition to the natural base pairs, the third, unnatural or synthetic base pair 5SICS-MMO2 was proven to be functionally equivalent to a natural base pair. Moreover, when it is combined with the natural base pairs, 5SICS-MMO2 provides a fully functional six-letter genetic alphabet. Namely, six-letter genetic alphabets can be copied (Yang et al. 2007), polymerase chain reaction (PCR)-amplified and sequenced (Sismour et al. 2004; Yang et al. 2011), transcribed to six-letter RNA and back to six-letter DNA (Leal et al. 2015), and used to encode proteins with added amino acids (Bain et al. 1992). The biological importance and relevance of the above six-letter genetic alphabets motivates us to study in Sect. 6 the \(6\times 6\) Markov matrices describing the probabilities of changing base pairs in the six-letter genetic system. When considering both DNA strands, each substitution is observed twice due to the complementarity between the strands, and hence the resulting transition matrix is centrosymmetric.
Moreover, there are other synthetic analogs to natural DNA which justify studying centrosymmetric matrices for \(n>6\). For instance, hachimoji DNA is a synthetic DNA that uses four synthetic nucleotides B, Z, P, S in addition to the four natural ones A, C, G, T. With the additional four synthetic nucleotides, hachimoji DNA forms four types of base pairs, two of which are unnatural: P binds with Z and B binds with S. The complementarity between both strands of the DNA implies that the transition matrix is centrosymmetric. Moreover, the research group responsible for the hachimoji DNA system had also studied a synthetic DNA analog system that used twelve different nucleotides, including the four found in DNA (see Yang et al. 2006). Although the biological models which motivate the study of centrosymmetric matrices in this paper require n to be an even number due to the double-helix structure of DNA, in Sect. 5 we include the case of n being odd for completeness.
Apart from embeddability, that is, the existence of Markov generators, it is also natural to ask about the uniqueness of a Markov generator, which is called the rate identifiability problem. Identifiability is a property which a model must satisfy in order for precise statistical inference to be possible. A class of phylogenetic models is identifiable if any two models in the class produce different data distributions. In this article, we further develop the results on rate identifiability of the Kimura 2-parameter model (Casanellas et al. 2020a) to study rate identifiability for strand symmetric models. We also show that there are embeddable strand symmetric Markov matrices with non-identifiable rates, namely whose Markov generator is not unique. Moreover, we show that strand symmetric Markov matrices are not generically identifiable, that is, there exists a positive measure subset of strand symmetric Markov matrices containing embeddable matrices whose rates are not identifiable.
This paper is organised as follows. In Sect. 2, we introduce the basic definitions and results on embeddability. In Sect. 3, we give a characterisation for a \(4 \times 4\) centrosymmetric Markov matrix M with four distinct real nonnegative eigenvalues to be embeddable, providing necessary and sufficient conditions in Theorem 2, and we also discuss the rate identifiability property of such matrices in Proposition 3. Moreover, in Sect. 4, using the conditions of our main result Theorem 2, we compute the volume of the embeddable matrices relative to the \(4 \times 4\) centrosymmetric Markov matrices with positive eigenvalues and \(\Delta >0\), as well as relative to the \(4 \times 4\) centrosymmetric Markov matrices with four distinct eigenvalues and \(\Delta >0\). We also compare the results on relative volumes obtained using our method with the algorithm suggested in Casanellas et al. (2023) to showcase the advantages of our method. In Sect. 5, we study higher order centrosymmetric matrices and motivate their use in Sect. 6 by exploring the case of synthetic nucleotides, where the phylogenetic models admit \(6 \times 6\) centrosymmetric mutation matrices. Finally, Sect. 7 discusses implications and possibilities for future work.
2 Preliminaries
In this section we will introduce the definitions and results that will be required throughout the paper. We will denote by \(M_n({\mathbb {K}})\) the set of \(n \times n\) square matrices with entries in the field \({\mathbb {K}}= {\mathbb {R}} \text { or } {\mathbb {C}}\). The subset of nonsingular matrices in \(M_n({\mathbb {K}})\) will be denoted by \(GL_{n}({\mathbb {K}})\).
Definition 1
A Markov (or transition) matrix is a nonnegative real square matrix with rows summing to one. A rate matrix is a real square matrix with rows summing to zero and nonnegative offdiagonal entries.
In this paper, we are focusing on a subset of Markov matrices called centrosymmetric Markov matrices.
Definition 2
A real \(n\times n\) matrix \(A=(a_{i,j})\) is said to be centrosymmetric (CS) if \(a_{i,j}=a_{n+1-i,\,n+1-j}\)
for every \(1\le i,j\le n\).
Definition 2 reveals that a CS matrix is nothing more than a square matrix which is symmetric about its center. This class of matrices has been studied previously, for instance in (Aitken 2017, page 124) and Weaver (1985). Examples of CS matrices for \(n=5\) and \(n=6\) are the following two matrices, respectively:
The class of CS matrices plays an important role in the study of Markov processes since they are indeed transition matrices for some processes in evolutionary biology. For instance, in Kimura (1957), centrosymmetric matrices are used to study the random assortment phenomena of subunits in chromosome division. Furthermore, in Schensted (1958), the same centrosymmetric matrices appear as the transition matrices in the model of subnuclear segregation in the macronucleus of ciliates. Finally, the work (Iosifescu 2014) examines a special case of the random genetic drift phenomenon, which consists of a population of individuals that are able to produce a single type of gamete. In this case, the transition matrices of the associated Markov chain are given by centrosymmetric matrices.
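Definition 2 can be tested mechanically: A is centrosymmetric exactly when reversing both its rows and its columns leaves it unchanged, i.e. \(A=JAJ\) for the exchange matrix J. A minimal sketch (the example matrix is ours, not from the text):

```python
import numpy as np

def is_centrosymmetric(A, tol=1e-12):
    """Check a_{i,j} = a_{n+1-i, n+1-j}: equivalently A = J @ A @ J,
    where J is the exchange matrix (the identity with reversed rows)."""
    J = np.eye(len(A))[::-1]
    return np.allclose(A, J @ A @ J, atol=tol)

# A 4x4 CS Markov matrix: row 4 is row 1 reversed, row 3 is row 2 reversed.
M = np.array([[0.4, 0.2, 0.1, 0.3],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.2, 0.5, 0.2],
              [0.3, 0.1, 0.2, 0.4]])
assert is_centrosymmetric(M)
assert np.allclose(M.sum(axis=1), 1.0)   # Markov: rows sum to one
```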
The embedding problem is directly related to the notions of matrix exponential and logarithm which we introduce for completeness below.
Definition 3
We define the exponential \(\exp (A)\) of a matrix A, using the Taylor power series of the function \(f(x)=e^x\), as
where \(A^0= I_n\) and \(I_n\) denotes the \(n\times n\) identity matrix. If \(A=P \;diag(\lambda _1,\dots ,\lambda _n)\;P^{-1}\) is an eigendecomposition of A, then \(\exp (A) = P \;diag(e^{\lambda _1},\dots ,e^{\lambda _n})\;P^{-1}\). Given a matrix \(A\in M_n({\mathbb {K}})\), a matrix \(B\in M_n({\mathbb {K}})\) is said to be a logarithm of A if \(\exp (B)=A\). If v is an eigenvector corresponding to the eigenvalue \(\lambda \) of A, then v is an eigenvector corresponding to the eigenvalue \(e^{\lambda }\) of \(\exp (A)\).
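The eigendecomposition formula in Definition 3 can be cross-checked against a general-purpose matrix exponential; a small sketch with an assumed diagonalizable matrix of our own choosing:

```python
import numpy as np
from scipy.linalg import expm

# Sketch of the eigendecomposition formula exp(A) = P diag(e^{l_i}) P^{-1},
# checked against scipy's expm (which does not require diagonalizability).
A = np.array([[-1.0, 1.0], [2.0, -2.0]])
w, P = np.linalg.eig(A)                       # A = P diag(w) P^{-1}
expA = (P @ np.diag(np.exp(w)) @ np.linalg.inv(P)).real
assert np.allclose(expA, expm(A))
# Eigenvector fact from Definition 3: if A v = l v then exp(A) v = e^l v.
assert np.allclose(expA @ P[:, 0], np.exp(w[0]) * P[:, 0])
```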
A Markov matrix M is called embeddable if it can be written as the exponential of a rate matrix Q, namely \(M= \exp (Q)\). Then any rate matrix Q satisfying the equation \(M= \exp (Q)\) is called a Markov generator of M.
Remark 1
Embeddable Markov matrices occur when we assume a continuous-time Markov chain, in which case the Markov matrices have the form \(M(t)=\exp (Qt)\),
where \(t \ge 0\) represents time and Q is a rate matrix. However, in the rest of the paper, we assume that t is incorporated in the rate matrix Q.
The existence of multiple logarithms is a direct consequence of the distinct branches of the logarithmic function in the complex field.
Definition 4
Given \(z\in {\mathbb {C}}\setminus {\mathbb {R}}_{\le 0}\) and \(k\in {\mathbb {Z}}\), the kth branch of the logarithm of z is \( \log _k(z):= \log |z| + (Arg(z)+2\pi k)i\), where \(\log \) is the logarithmic function on the real field and \(Arg(z)\in (-\pi ,\pi )\) denotes the principal argument of z. The logarithmic function arising from the branch \(\log _0(z)\) is called the principal logarithm of z and is denoted as \(\log (z)\).
It is known that if A is a matrix with no eigenvalues in \({\mathbb {R}}_{\le 0}\), then there is a unique logarithm of A all of whose eigenvalues are given by the principal logarithm of the eigenvalues of A (Higham 2008, Theorem 1.31). We refer to this unique logarithm as the principal logarithm of A, denoted by Log(A).
By definition, the Markov generators of a Markov matrix M are those logarithms of M that are rate matrices. In particular they are real logarithms of M. The following result enumerates all the real logarithms with rows summing to zero of any given Markov matrix with positive determinant and distinct eigenvalues. Therefore, all Markov generators of such a matrix are necessarily of this form.
Proposition 1
(Casanellas et al. 2023, Proposition 4.3). Let \(M=P\; diag \big ( 1,\lambda _1,\dots ,\lambda _t, \mu _1,\overline{\mu _1},\dots ,\mu _s,\overline{\mu _s} \big ) \; P^{-1}\) be an \(n\times n\) Markov matrix with \(P\in GL_n({\mathbb {C}})\) and eigenvalues \(\lambda _i\in {\mathbb {R}}_{>0}\) for \(i=1,\dots ,t\) and \(\mu _j \in \{z\in {\mathbb {C}}: Im(z) > 0\}\) for \(j=1,\dots ,s\), all of them pairwise distinct. Then, a matrix Q is a real logarithm of M with rows summing to zero if and only if \(Q=P\; diag \Big ( 0, \log (\lambda _1),\dots ,\log (\lambda _t), \log _{k_1}(\mu _1),\overline{\log _{k_1}(\mu _1)},\dots ,\log _{k_s}(\mu _s),\overline{\log _{k_s}(\mu _s)} \Big ) \; P^{-1}\) for some \(k_1,\dots ,k_s\in {\mathbb {Z}}\).
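Proposition 1 can be illustrated numerically: shifting the logarithm branch on a conjugate pair of eigenvalues produces different real zero-row-sum logarithms of the same Markov matrix. The sketch below uses a hypothetical \(3\times 3\) circulant rate matrix Q0 (our own example, not the paper's) and verifies a few branch choices:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical circulant rate matrix with one conjugate pair of complex
# eigenvalues (an illustration of Proposition 1, not the paper's example).
Q0 = np.array([[-2.0, 1.5, 0.5],
               [ 0.5, -2.0, 1.5],
               [ 1.5, 0.5, -2.0]])
M = expm(Q0)
w, P = np.linalg.eig(M)
Pinv = np.linalg.inv(P)
# +1 for the eigenvalue with positive imaginary part, -1 for its conjugate,
# 0 for real eigenvalues (their principal logarithm is left untouched).
shift = np.where(np.abs(w.imag) > 1e-9, np.sign(w.imag), 0.0)
for k in (0, 1, -1):
    d = np.log(w.astype(complex)) + 2j * np.pi * k * shift
    Qk = P @ np.diag(d) @ Pinv
    assert np.max(np.abs(Qk.imag)) < 1e-8             # each k gives a real...
    Qk = Qk.real
    assert np.allclose(Qk.sum(axis=1), 0, atol=1e-8)  # ...zero-row-sum logarithm
    assert np.allclose(expm(Qk), M)                   # and exp(Qk) recovers M
```

Only the branch \(k=0\) reproduces Q0 itself; the other branches are real logarithms that are generally not rate matrices.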
Remark 2
In particular, the principal logarithm of M can be computed as \(Log(M)=P\; diag \big ( 0, \log (\lambda _1),\dots ,\log (\lambda _t), \log (\mu _1),\overline{\log (\mu _1)},\dots ,\log (\mu _s),\overline{\log (\mu _s)} \big ) \; P^{-1}\), i.e. by taking \(k_1=\dots =k_s=0\) in Proposition 1.
In this paper, we focus on the embedding problem for the class of centrosymmetric matrices. In Sect. 3, we will first study the embeddability of \(4\times 4\) centrosymmetric Markov matrices, which include the K3P, K2P and JC Markov matrices. In Sect. 5 and Sect. 6, we will further study the embeddability of higher order centrosymmetric Markov matrices.
3 Embeddability of \(4\times 4\) centrosymmetric matrices
In this section, we begin our study by analyzing the embeddability of \(4\times 4\) centrosymmetric matrices also known as strand symmetric matrices. We will provide necessary and sufficient conditions for \(4\times 4\) centrosymmetric matrices to be embeddable. Moreover, we will discuss their rate identifiability problem as well.
The transition matrices of the strand symmetric model are assumed to have the \(4\times 4\) centrosymmetric form
where
Recall that the K3P matrices are assumed to have the form
In the case of the K2P matrices, we additionally have \(m_{12} = m_{13}\), while in the case of JC matrices, \(m_{12} = m_{13} = m_{14}\). It can be easily seen that K3P, K2P, and JC Markov (rate) matrices are centrosymmetric.
Let us define the following matrix
compare (Casanellas and Kedzierska 2013, Section 6). For a \(4\times 4\) CS Markov matrix M, we define \(F(M):=S^{-1}M S\). By direct computation, it can be checked that F(M) is a block diagonal matrix
where
Define two matrices, \(M_1:=\begin{pmatrix}\lambda & 1-\lambda \\ 1-\mu & \mu \end{pmatrix}\) and \(M_2:=\begin{pmatrix}\alpha & \alpha '\\ \beta '& \beta \end{pmatrix}\), which are the upper and lower block matrices in (3.2), respectively.
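Since the explicit matrix S is not reproduced above, the sketch below assumes one standard choice of S that block-diagonalizes \(4\times 4\) CS matrices (an assumption on our part), together with a hypothetical CS Markov matrix M:

```python
import numpy as np

# Assumed S: one common choice pairing each state with its "complement";
# any matrix implementing the CS symmetry in the same way works similarly.
S = np.array([[1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 1, -1, 0],
              [1, 0, 0, -1]], dtype=float)
M = np.array([[0.4, 0.2, 0.1, 0.3],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.2, 0.5, 0.2],
              [0.3, 0.1, 0.2, 0.4]])       # a hypothetical CS Markov matrix
F = np.linalg.inv(S) @ M @ S               # F(M) = S^{-1} M S
assert np.allclose(F[:2, 2:], 0) and np.allclose(F[2:, :2], 0)  # block-diagonal
M1, M2 = F[:2, :2], F[2:, 2:]
assert np.allclose(M1, [[0.7, 0.3], [0.3, 0.7]])  # upper block is Markov
assert np.allclose(M1.sum(axis=1), 1)
```

The upper block collects sums of complementary entries and is always Markov, while the lower block collects differences and in general is not.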
Similarly, the rate matrices in strand symmetric models are assumed to have the \(4 \times 4\) centrosymmetric form
where
So, for a \(4\times 4\) CS rate matrix Q, we can also define \(F(Q):=S^{-1}Q S\). By direct computation, it can be checked that
where
Define two matrices, \(Q_1:=\begin{pmatrix}-\rho & \rho \\ \sigma & -\sigma \end{pmatrix}\) and \(Q_2:=\begin{pmatrix}\delta & \delta '\\ \gamma '& \gamma \end{pmatrix}\), which are the upper and lower block matrices in (3.4), respectively.
The following results provide necessary conditions for a \(4\times 4\) CS Markov matrix to be embeddable.
Lemma 1
Let \(M=(m_{ij})\) be a \(4\times 4\) CS Markov matrix and \(M=\exp (Q)\) for some CS rate matrix Q. Then

1.
\(m_{11}+m_{14}+m_{22}+m_{23}>1\) and

2.
\((m_{22}-m_{23})(m_{11}-m_{14})>(m_{24}-m_{21})(m_{13}-m_{12}).\)
Proof
We have that
Then
Thus, \(M_1\) is an embeddable \(2\times 2\) Markov matrix. Using the embeddability criteria of \(2\times 2\) Markov matrices in Kingman (1962), we have that \(1<tr(M_1)=\lambda +\mu \), which is the desired inequality. Additionally, since \(M_2=\exp (Q_2)\), \(det(M_2)>0\) as desired. \(\square \)
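The two necessary conditions of Lemma 1 are inexpensive entrywise checks; a sketch on a hypothetical CS Markov matrix (passing them does not by itself prove embeddability):

```python
import numpy as np

# The two necessary conditions of Lemma 1, checked entrywise on an example
# CS Markov matrix of our own choosing (0-based indexing below).
M = np.array([[0.4, 0.2, 0.1, 0.3],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.2, 0.5, 0.2],
              [0.3, 0.1, 0.2, 0.4]])
cond1 = M[0, 0] + M[0, 3] + M[1, 1] + M[1, 2] > 1
cond2 = (M[1, 1] - M[1, 2]) * (M[0, 0] - M[0, 3]) \
        > (M[1, 3] - M[1, 0]) * (M[0, 2] - M[0, 1])
assert cond1 and cond2   # both necessary conditions hold for this M
```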
Lemma 2
Let \(M=(m_{ij})\) be a \(4\times 4\) CS Markov matrix and \(M=\exp (Q)\) for some CS rate matrix \(Q=(q_{ij})\). If \(\lambda +\mu \ne 2\), then
and
Proof
By direct computations and the proof of Lemma 1,
We then have the following system of equations:
Summing the two equations, we get
Note that by Lemma 1, \(\lambda +\mu >1.\) Therefore,
Using Equations (3.5) and (3.6), we obtain
The proof is now complete. \(\square \)
Proposition 2
Given two matrices \(A=(a_{ij}),B=(b_{ij}) \in M_2({\mathbb {R}})\), consider the blockdiagonal matrix \(C= diag(A,B)\). Then the following statements hold:

i)
\(F^{-1}(C):= S C S^{-1}\) is a CS matrix.

ii)
\(F^{1}(C)\) is a Markov matrix if and only if A is a Markov matrix and
$$\begin{aligned} |b_{2 2}|\le a_{1 1}, \qquad |b_{2 1}|\le a_{1 2}, \qquad |b_{1 2}|\le a_{ 2 1}, \qquad |b_{1 1}|\le a_{2 2}. \end{aligned}$$ 
iii)
\(F^{1}(C)\) is a rate matrix if and only if A is a rate matrix and
$$\begin{aligned} b_{2 2}\le a_{1 1}(\le 0), \quad |b_{2 1}|\le a_{1 2} (=-a_{1 1}), \quad |b_{1 2}|\le a_{2 1}(=-a_{2 2}), \quad b_{1 1}\le a_{2 2}(\le 0). \end{aligned}$$
Proof
To prove i), by direct computation we obtain that
Then ii) follows from the above expression of \(F^{-1}(C)\) and the fact that the rows of Markov matrices sum to 1 and the entries are nonnegative, while iii) similarly follows from the fact that the rows of rate matrices sum to zero and the off-diagonal entries are nonnegative.\(\square \)
For any \(4\times 4\) CS Markov matrix \(M=(m_{ij})\), let us recall that by (3.2), M is block-diagonalizable via the matrix S. In the rest of this section, we will study the upper and the lower block matrices of F(M) more closely. This block-diagonalization reduces our analysis to the logarithms of the \(2\times 2\) upper and lower blocks, which allows us to establish the main embeddability criteria for \(4\times 4\) CS Markov matrices. This result will be presented in Theorem 2.
3.1 Upper block
As we have seen in (3.2), the upper block of F(M) is given by the \(2\times 2\) matrix \(M_1=\begin{pmatrix} \lambda & 1-\lambda \\ 1-\mu & \mu \end{pmatrix}\), which is a Markov matrix. If \(P_1=\begin{pmatrix} 1& 1-\lambda \\ 1& \mu -1\end{pmatrix}\), then
Hence, by Proposition 1, any logarithm of \(M_1\) can be written as
for some integers \(k_1\) and \(k_2\). Let \(p=\log (\lambda +\mu -1)\), \(q=1-\lambda \), and \(r=1-\mu \). Then
Lemma 3
If \(\lambda +\mu \ne 2\), then \(L^{M_1}_{k_1,k_2}\) is a real matrix if and only if \(k_1=k_2=0\) and \(\lambda +\mu >1\). In this case, the only real logarithm of \(M_1\) is the principal logarithm
Proof
For fixed \(k_1\) and \(k_2\), the eigenvalues of \(L^{M_1}_{k_1,k_2}\) are \(\lambda _1=2k_1\pi i\) and \(\lambda _2=p+2k_2\pi i.\) Then \(L^{M_1}_{k_1,k_2}\) is a real matrix if and only if \(\lambda _1,\lambda _2\in {\mathbb {R}}\) or \(\lambda _2=\overline{\lambda _1}\). Since \(\lambda +\mu \ne 2\), \(\lambda _2\ne \overline{\lambda _1}\). Thus, \(L^{M_1}_{k_1,k_2}\) is a real matrix if and only if \(\lambda _1,\lambda _2\in {\mathbb {R}}\). Finally, \(\lambda _1\in {\mathbb {R}}\) if and only if \(k_1=0\) and \(\lambda _2\in {\mathbb {R}}\) if and only if \(k_2=0\) and \(\lambda +\mu >1\). \(\square \)
3.2 Lower block
The lower block of F(M) is given by the matrix \(M_2=\begin{pmatrix} \alpha & \alpha '\\ \beta '& \beta \end{pmatrix}\). Unlike \(M_1\), the matrix \(M_2\) is generally not a Markov matrix. The discriminant of the characteristic polynomial of \(M_2\) is given by \(\Delta =(\alpha -\beta )^2+4\alpha '\beta ',\)
with \(\alpha ,\beta ,\alpha ',\beta '\) defined as in (3.3). If \(\Delta >0\), then \(M_2\) has two distinct real eigenvalues, and if \(\Delta <0\), then \(M_2\) has a pair of conjugate complex eigenvalues. Moreover, if \(\Delta =0\), then \(M_2\) has a repeated real eigenvalue and is either diagonalizable or similar to a \(2\times 2\) Jordan block. We will assume that \(\Delta \ne 0\), so that \(M_2\) is diagonalizable with two distinct eigenvalues.
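Since the characteristic polynomial of the \(2\times 2\) block is \(x^2-tr(M_2)\,x+\det (M_2)\), the discriminant can be computed as \(tr(M_2)^2-4\det (M_2)\); a sketch with a hypothetical lower block of our own choosing:

```python
import numpy as np

# Discriminant of the characteristic polynomial of a hypothetical 2x2 block.
M2 = np.array([[0.3, 0.1], [0.1, 0.1]])
delta = np.trace(M2) ** 2 - 4 * np.linalg.det(M2)
assert delta > 0                      # so M2 has two distinct real eigenvalues
w = np.linalg.eigvals(M2)
assert np.all(np.abs(w.imag) < 1e-12) and abs(w[0] - w[1]) > 1e-9
```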
Let \(P_2=\begin{pmatrix} \frac{\sqrt{\Delta }+(\alpha -\beta )}{2}& \frac{-\sqrt{\Delta }+(\alpha -\beta )}{2}\\ \beta '& \beta '\end{pmatrix}\). Then
Let us now define
where \(k_3\) and \(k_4\) are integers. Therefore, any logarithm of \(M_2\) can be written as
where
Lemma 4

1.
If \(\Delta >0\), then \(L^{M_2}_{k_3,k_4}\) is a real matrix if and only if \(\alpha +\beta >\sqrt{\Delta }\) and \(k_3=k_4=0.\)

2.
If \(\Delta <0\), then \(L^{M_2}_{k_3,k_4}\) is a real matrix if and only if \(k_4=-k_3\).
Proof

1.
If \(\Delta >0\), then \(Im(l_3)=2k_3\pi \) and \(Im(l_4)=2k_4\pi \). Moreover, \(Re(l_3)\ne Re(l_4).\) Since \(l_3\) and \(l_4\) are the eigenvalues of \(L^{M_2}_{k_3,k_4}\), this implies that \(l_3\ne {\overline{l}}_4\). In particular, \(L^{M_2}_{k_3,k_4}\) is a real matrix if and only if both \(l_3\) and \(l_4\) are real.

2.
Let us assume \(\Delta <0\) and take \(z=\frac{(\alpha +\beta )+\sqrt{\Delta }}{2}\). Fixing \(k_3,k_4\in {\mathbb {Z}}\), the eigenvalues of \(L^{M_2}_{k_3,k_4}\) are \(l_3=\log (z)+2k_3\pi i\) and \(l_4=\log ({\overline{z}})+2k_4\pi i=\overline{\log (z)}+2k_4\pi i\), which are both complex numbers. Thus, \(L^{M_2}_{k_3,k_4}\) is real if and only if \(l_3={\overline{l}}_4\). Hence, \(k_4=-k_3.\) Conversely, \(k_4=-k_3\) implies that \(l_3+l_4=2 Re(l_3)\in {\mathbb {R}}\) and \(\frac{l_3-l_4}{\sqrt{\Delta }}=\frac{2 Im(l_3)i}{\sqrt{\Delta }}\in {\mathbb {R}}\). Thus, all entries of \(L^{M_2}_{k_3,k_4}\) are real.
\(\square \)
3.3 Logarithms of \(4\times 4\) CS Markov matrices
Let M be a \(4\times 4\) CS Markov matrix. Using the values defined in (3.3) and (3.8), we can now label its four eigenvalues as follows: \(1,\ \lambda _1=\lambda +\mu -1,\ \lambda _2=\frac{(\alpha +\beta )+\sqrt{\Delta }}{2},\ \lambda _3=\frac{(\alpha +\beta )-\sqrt{\Delta }}{2}.\)
We note that the subset of \(4\times 4\) CS Markov matrices with repeated eigenvalues (whether diagonalizable or with a Jordan block of size greater than 1) has measure zero. Therefore, generic \(4\times 4\) CS Markov matrices have no repeated eigenvalues, and hence we are going to assume that the eigenvalues are distinct. In particular, we are assuming that M diagonalizes. Furthermore, since we want M to have real logarithms and no repeated eigenvalues, we need the real eigenvalues to be positive.
The following theorem characterizes the embeddability of a \(4\times 4\) CS Markov matrix with positive and distinct eigenvalues. Furthermore, the theorem guarantees that a \(4\times 4\) CS Markov matrix is embeddable if and only if it admits a CS Markov generator. In particular, the characterization of embeddability is the same when restricting to rate matrices satisfying the symmetries imposed by the model (model embeddability) as when allowing arbitrary rate matrices (embedding problem).
Theorem 1
Let M be a diagonalizable \(4\times 4\) CS Markov matrix with positive and distinct eigenvalues \(\lambda _1, \lambda _2, \lambda _3\) defined as in (3.10). Let us define
where \(k=0\) if \(\Delta >0\) and \(k\in {\mathbb {Z}}\) if \(\Delta <0.\) Then any real logarithm of M is given by
where
with \(\lambda ,\ \mu ,\ \alpha ,\ \beta ,\ \alpha ' \text { and } \beta '\) defined as in (3.3) and \(\Delta \) as in (3.8).
In particular, any real logarithm of M is also a \(4\times 4\) CS matrix whose entries \(q_{11},\dots ,q_{24}\) are given by:
Proof
Let us note that
Since we assume that the eigenvalues of M are distinct, according to Proposition 1, any logarithm of M can be written as
The last equation and the fact that S and \(S^{-1}\) are real matrices imply that Q will be real if and only if both \(L^{M_1}_{k_1,k_2}\) and \(L^{M_2}_{k_3,k_4}\) are real. Here \(L^{M_1}_{k_1,k_2}\) is the upper block given in (3.7) and \(L^{M_2}_{k_3,k_4}\) is the lower block defined in (3.9). By Lemma 3, \(L^{M_1}_{k_1,k_2}\) being a real logarithm implies that \(k_1=k_2=0\) and \(\lambda +\mu >1\). Then \(L^{M_2}_{k_3,k_4}\) being a real matrix, according to Lemma 4, implies that \(k_3=k_4=0\) if \(\Delta >0\), while \(k_4=-k_3\) if \(\Delta <0.\) Therefore, the upper block is \(L^{M_1}_{0,0}\) and the lower block is \(L^{M_2}_{k,-k}\) for \(k=k_3\), completing the proof. \(\square \)
Now we are interested in knowing when the real logarithm of a \(4\times 4\) CS Markov matrix is a rate matrix. Using the same notation as in Theorem 1 we get the following result.
Theorem 2
A diagonalizable \(4\times 4\) CS Markov matrix M with distinct eigenvalues is embeddable if and only if the following conditions hold for \(k=0\) if \(\Delta >0\) or for some \(k\in {\mathbb {Z}}\) if \(\Delta <0\):
Proof
The logarithm of a \(4\times 4\) CS Markov matrix will depend on whether \(\Delta >0\) or \(\Delta <0\). In particular, it will depend on whether the eigenvalues \(\lambda _2\) and \(\lambda _3\) are real and positive or whether they are conjugated complex numbers.

1.
If \(\Delta >0\), then both \(\lambda _2\) and \(\lambda _3\) are real and \(\lambda _2>\lambda _3\). Hence, \(z<y<0.\) Moreover, Lemma 4 implies that \(\lambda _3>0\) and hence \(\lambda _2\lambda _3>0\).

2.
If \(\Delta <0\), then \(\lambda _2,\lambda _3\in {\mathbb {C}}\setminus {\mathbb {R}}\) and \(\lambda _2=\overline{\lambda _3}\). Hence, \(y+z>0\) and \(y-z=4\pi ki.\) Moreover, \(\lambda _2\lambda _3=|\lambda _3|^2>0\) since \(\lambda _3\ne 0\).
Thus, in both cases, \(\alpha _1,\beta _1,\delta (k),\varepsilon (k), \phi (k),\gamma (k)\in {\mathbb {R}}.\) Moreover, \(\alpha _1\) and \(\beta _1\) are both nonpositive. In particular, Theorem 1 together with Proposition 2 imply that a real logarithm of M is a rate matrix if and only if
Furthermore, the condition \(\lambda _1>0\) comes from Lemma 3. The proof is now complete. \(\square \)
Remark 3
According to Theorem 2, the embeddability of a \(4\times 4\) CS Markov matrix M with distinct positive eigenvalues can be decided by checking six inequalities in the entries of M. However, if M has non-real eigenvalues, then one has to check infinitely many groups of inequalities, one for each value of \(k\in {\mathbb {Z}}\); it is enough that one of those systems is consistent to guarantee that M is embeddable. Theorem 5.5 in Casanellas et al. (2023) provides bounds on the values of k for which the corresponding inequalities may hold.
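Remark 3 suggests a finite search over branches k. The sketch below implements a naive version of that search (the window kmax is our assumption; the sharp bound is in Casanellas et al. 2023, Theorem 5.5), testing one candidate real logarithm per branch:

```python
import numpy as np
from scipy.linalg import expm

def is_rate(Q, tol=1e-9):
    """Rows sum to zero and off-diagonal entries are nonnegative."""
    off = Q - np.diag(np.diag(Q))
    return np.all(off >= -tol) and np.allclose(Q.sum(axis=1), 0, atol=tol)

def embeddable_scan(M, kmax=10, tol=1e-8):
    """Naive sketch of the search in Remark 3: when M has a conjugate pair of
    eigenvalues, test one candidate real logarithm per branch k in a finite
    window (kmax is an assumption, not the paper's bound)."""
    w, P = np.linalg.eig(M.astype(complex))
    Pinv = np.linalg.inv(P)
    shift = np.where(np.abs(w.imag) > 1e-9, np.sign(w.imag), 0.0)
    for k in range(-kmax, kmax + 1):
        d = np.log(w) + 2j * np.pi * k * shift
        Q = P @ np.diag(d) @ Pinv
        if np.max(np.abs(Q.imag)) > tol:
            continue                      # this branch gives no real logarithm
        if is_rate(Q.real) and np.allclose(expm(Q.real), M, atol=1e-6):
            return True, k, Q.real        # found a Markov generator
    return False, None, None

# Example: the exponential of a rate matrix must be detected as embeddable.
Q0 = np.array([[-2.0, 1.5, 0.5],
               [ 0.5, -2.0, 1.5],
               [ 1.5, 0.5, -2.0]])
ok, k, Q = embeddable_scan(expm(Q0))
assert ok and np.allclose(Q, Q0, atol=1e-6)
```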
Let us take a look at the class of K3P matrices, which is a special case of strand symmetric matrices. Indeed, for a K3P matrix \(M=(m_{ij})\), we have that
Suppose that a K3P Markov matrix \(M=(m_{ij})\) is K3P-embeddable, i.e. \(M=\exp (Q)\) for some K3P rate matrix Q. Recall that the eigenvalues of M are
In this case, we have that
In particular, we see that \(\Delta > 0\) unless \(m_{12}=m_{13}\). Moreover,
The inequalities in Theorem 2 can be spelled out as follows:
These inequalities are equivalent to the K3P-embeddability criteria presented in (Roca-Lacostena and Fernández-Sánchez 2018b, Theorem 3.1) and (Ardiyansyah et al. 2021, Theorem 1). Moreover, they are also equivalent to the restriction to centrosymmetric matrices of the embeddability criteria for \(4\times 4\) Markov matrices with distinct eigenvalues given in (Casanellas et al. 2023, Theorem 1.1).
In the last part of this section, we discuss the rate identifiability problem for \(4\times 4\) centrosymmetric matrices. If a centrosymmetric Markov matrix arises from a continuous-time model, then we want to determine its corresponding substitution rates. In other words, given an embeddable \(4\times 4\) CS matrix, we want to know whether we can uniquely identify its Markov generator.
It is worth noting that Markov matrices with repeated real eigenvalues may admit more than one Markov generator (e.g. examples 4.2 and 4.3 in (Casanellas et al. 2020a) show embeddable K2P matrices with more than one Markov generator). Nonetheless, this is not possible if the Markov matrix has distinct eigenvalues, because in this case its only possible real logarithm would be the principal logarithm (Culver 1966). As one considers fewer restrictions in a model, the measure of the set of matrices with repeated real eigenvalues decreases, eventually becoming a measure zero set. For example, this is the case within the K3P model, where both its submodels (the K2P model and the JC model) consist of matrices with repeated eigenvalues and have positive measure subsets of embeddable matrices with non-identifiable rates. However, when considering the whole set of K3P Markov matrices, the subset of embeddable matrices with more than one Markov generator has measure zero (see Chapter 4 in (Roca-Lacostena 2021)). Nevertheless, this behaviour only holds if the Markov matrices within the model have real eigenvalues.
Proposition 3
There is a positive measure subset of \(4\times 4\) CS Markov matrices that are embeddable and whose rates are not identifiable. Moreover, all the Markov generators of the matrices in this set are also CS matrices.
Proof
Given
let us consider the following matrices
A straightforward computation shows that M is a CS Markov matrix and Q is a CS rate matrix. Moreover, they both have nonzero entries. By applying the exponential series to Q, we get that \(\exp (Q)=M\). This means that M is embeddable and Q is a Markov generator of M.
Since Q is a rate matrix, so is Qt for any \(t\in {\mathbb {R}}_{\ge 0}\). Therefore, \(\exp (Qt)\) is an embeddable Markov matrix, because the exponential of any rate matrix is necessarily a Markov matrix. See (Pachter and Sturmfels 2005, Theorem 4.19) for more details. Moreover, we have that
so \(S^{-1} \exp (Qt)S \) is a 2-block diagonal matrix. Hence, by Proposition 2 we have that \(\exp (Qt)\) is an embeddable strand symmetric Markov matrix for all \(t\in {\mathbb {R}}_{>0}\).
Now, let us define \(V=P\;diag(0,0,2\pi i, -2\pi i)\;P^{-1}\). Note that Q and V diagonalize simultaneously via P and hence they commute. Therefore,
by the Baker-Campbell-Hausdorff formula. Moreover,
for all \(k \in {\mathbb {Z}}\). Note that kV is bounded for any given k and hence, for t large enough, \(Qt+mV\) is a rate matrix for every integer m between 0 and k.
This shows that, for t large enough, \(\exp (Qt)\) is an embeddable CS Markov matrix with at least \(k+1\) different CS Markov generators. Moreover, \(\exp (Qt)\) and all its generators have no null entries by construction and they can therefore be perturbed as in Theorem 3.3 in Casanellas et al. (2020b) to obtain a positive measure subset of embeddable CS Markov matrices that have \(k+1\) CS Markov generators. \(\square \)
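The mechanism of this proof can be illustrated numerically. The matrices below are a hypothetical example of the construction, not the ones from the proof (whose entries are elided above): Q is a CS rate matrix with eigenvalues \(0, -6, -4\pm i\), and V is the real matrix obtained by evaluating \(P\,diag(0,0,2\pi i,-2\pi i)\,P^{-1}\) on the corresponding eigenvector matrix P. A sketch in Python:

```python
import numpy as np
from scipy.linalg import expm

# A hypothetical CS rate matrix Q with all off-diagonal entries positive;
# its eigenvalues are 0, -6 and the conjugate pair -4 +/- i.
Q = 0.5 * np.array([[-7.0, 2, 4, 1],
                    [ 4, -7, 1, 2],
                    [ 2,  1,-7, 4],
                    [ 1,  4, 2,-7]])

# V = P diag(0, 0, 2*pi*i, -2*pi*i) P^{-1}, built from the eigenvectors of Q
# attached to the conjugate pair; for this Q it evaluates to the real matrix:
V = np.pi * np.array([[ 0.0,-1, 1, 0],
                      [ 1,  0, 0,-1],
                      [-1,  0, 0, 1],
                      [ 0,  1,-1, 0]])

t = 5.0
Qt = Q * t
assert np.allclose(Q @ V, V @ Q)            # Q and V commute
assert np.allclose(expm(V), np.eye(4))      # exp(V) = identity
# Qt and Qt + V are two different CS rate matrices with the same exponential:
off = ~np.eye(4, dtype=bool)
assert ((Qt + V)[off] >= 0).all() and np.allclose((Qt + V).sum(axis=1), 0)
assert np.allclose(expm(Qt), expm(Qt + V))
```

Here \(t=5\) is already "large enough": the smallest off-diagonal entry of Qt is \(t/2\), which dominates the entries of V (of magnitude \(\pi\)) once \(t>\pi\) in the worst positions.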
Remark 4
The perturbation presented in Theorem 3.3 in Casanellas et al. (2020b) consists of small changes on the real and complex parts of the eigenvalues and eigenvectors of M other than the eigenvector \((1,\dots ,1)\) and its corresponding eigenvalue 1. If those changes are small enough then the resulting transition matrix and all its generators still satisfy the stochastic constraints of Markov/rate matrices.
Remark 5
Using the same notation as in the proposition above and given \(C\in GL_2({\mathbb {C}})\), let us define
Since \(Q(I_2)=Q\) is a CS rate matrix with no null entries, so is Q(C) for \(C\in GL_2({\mathbb {C}})\) close enough to \(I_2\). Moreover, by construction we have that \(\exp (2tQ(C))=\exp (2tQ)\) for all \(t\in {\mathbb {N}}\). Therefore, for \(t\in {\mathbb {N}}\) we have that \(\exp (2tQ)\) has uncountably many Markov generators (i.e. 2tQ(C) with C close to \(I_2\)) and all of them are CS matrices (Culver 1966, Corollary 1). It is worth noting that according to (Culver 1966, Corollary 1), if a matrix has uncountably many logarithms, then it necessarily has repeated real eigenvalues. Therefore, the subset of embeddable CS Markov matrices with uncountably many generators has measure zero within the set of all matrices.
4 Volumes of \(4\times 4\) CS Markov matrices
In this section, we compute the relative volumes of embeddable \(4\times 4\) CS Markov matrices within some meaningful subsets of Markov matrices. The aim of this section is to describe how large the different sets of matrices are compared to each other.
Let \(V^{Markov}_4\) be the set of all \(4\times 4\) CS Markov matrices. We use the following description
More explicitly, we identify the \(4\times 4\) CS Markov matrix
with a point \((b,c,d,e,g,h)\in V^{Markov}_4.\) Let \(V_+\) be the set of all CS Markov matrices having real positive eigenvalues, where
is the discriminant of the matrix \(M_2\) as stated in Sect. 3. We have \(V_+\subseteq V^{Markov}_4\). More explicitly,
Let \(V_{em+}\) be the set of all embeddable \(4\times 4\) CS Markov matrices with four distinct real positive eigenvalues. We have \(V_{em+}\subseteq V_+\). Therefore, by Theorem 2,
Finally, we consider the following two biologically relevant subsets of \(V^{Markov}_4\). Let \(V_{\mathrm{{DLC}}}\) be the set of diagonally largest in column (DLC) Markov matrices, that is, the subset of \(V^{Markov}_4\) containing all CS Markov matrices in which each diagonal element is the largest element in its column. These matrices are related to matrix parameter identifiability in phylogenetics (Chang 1996). Secondly, let \(V_{\mathrm{{DD}}}\) be the set of diagonally dominant (DD) Markov matrices, that is, the subset of \(V^{Markov}_4\) containing all CS Markov matrices in which, in each row, the diagonal element is at least the sum of all the other elements. Biologically, \(V_{\mathrm{{DD}}}\) consists of matrices for which the probability of not mutating is at least as large as the probability of mutating. If a diagonally dominant matrix is embeddable, it has an identifiable rate matrix (Cuthbert 1972; James 1973). By definition, \(V_{\mathrm{{DD}}}\subseteq V_{\mathrm{{DLC}}}\).
Remark 6
The sets \(V_+\), \(V_{em+}\), \(V_{\mathrm{{DLC}}}\) and \(V_{\mathrm{{DD}}}\) considered in this section are all subsets of the set \(V^{Markov}_4\) of \(4\times 4\) CS Markov matrices, but the same definitions apply verbatim to \(n\times n\) CS Markov matrices. In the following sections we will therefore use the same notation \(V_+\), \(V_{em+}\), \(V_{\mathrm{{DLC}}}\), \(V_{\mathrm{{DD}}}\) for the corresponding subsets of the set \(V^{Markov}_n\) of \(n\times n\) CS Markov matrices.
In the rest of this section, the number v(A) denotes the Euclidean volume of the set A. By definition, \(V^{Markov}_4\), \(V_{\mathrm{{DLC}}}\) and \(V_{\mathrm{{DD}}}\) are polytopes, since they are defined by linear inequalities in \({\mathbb {R}}^6\). Hence, we can use Polymake (Gawrilow and Joswig 2000) to compute their exact volumes, obtaining
Hence, we see that \(V_{\mathrm{{DLC}}}\) and \(V_{\mathrm{{DD}}}\) constitute roughly only \(6.25\%\) and \(1.56\%\) of \(V_4^{Markov}\), respectively.
On the other hand, we estimate the volumes of the sets \(V_+, V_{em+}, V_{\mathrm{{DLC}}}\cap V_+,V_{\mathrm{{DLC}}}\cap V_{em+},V_{\mathrm{{DD}}}\cap V_+ \text{ and } V_{\mathrm{{DD}}}\cap V_{em+}\) using the hit-and-miss Monte Carlo integration method (Hammersley 2013) with sufficiently many sample points in Mathematica (Inc. 2022). In principle, Theorem 2 enables us to compute the exact volumes of these sets. For K3P matrices, for instance, such exact volume computations were feasible in Roca-Lacostena and Fernández-Sánchez (2018b). However, while for K3P matrices the embeddability criterion is given by three quadratic polynomial inequalities, for CS matrices the nonlinear and non-polynomial constraints imposed on each set make the exact computation of the volumes intractable, so we approximate them instead. Given a subset \(A\subseteq V_4^{Markov}\), the hit-and-miss Monte Carlo estimate of v(A) with n sample points is given by the fraction of the n sample points that belong to A. For computational purposes, in the formula of \(\phi (0)\) and \(\varepsilon (0)\), we use the fact that
All code for the computations, implemented in Mathematica and Polymake, can be found at the following address: https://github.com/ardiyam1/Embeddabilityandrateidentifiabilityofcentrosymmetricmatrices.
The results of these estimations, obtained via hit-and-miss Monte Carlo integration implemented in Mathematica with n sample points, are presented in Table 1, while Table 2 provides estimated volume ratios between relevant subsets of centrosymmetric Markov matrices, again using hit-and-miss Monte Carlo integration with n sample points. For Table 1, we first generate n centrosymmetric matrices whose off-diagonal entries are sampled uniformly in [0, 1], with the rows forced to sum to one. Out of these n matrices, we test how many are actually Markov matrices (i.e. have nonnegative diagonal entries) and then how many of those have positive eigenvalues. In particular, for \(n=10^7\) sample points containing 277628 centrosymmetric Markov matrices, Table 2 suggests that approximately \(1.7\%\) of the centrosymmetric Markov matrices with distinct positive eigenvalues are embeddable. Moreover, we can see that for \(n=10^7\), out of all embeddable centrosymmetric Markov matrices with distinct positive eigenvalues, almost all are diagonally largest in column, while only \(28\%\) are diagonally dominant.
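The sampling pipeline for Table 1 can be sketched as follows (a small-scale reimplementation, not the Mathematica code from the repository):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
markov = positive = 0
for _ in range(n):
    # Off-diagonal parameters sampled uniformly in [0, 1]; rows forced to sum to one.
    b, c, d, e, g, h = rng.uniform(0, 1, size=6)
    a, f = 1 - b - c - d, 1 - e - g - h
    M = np.array([[a, b, c, d],
                  [e, f, g, h],
                  [h, g, f, e],
                  [d, c, b, a]])
    if a >= 0 and f >= 0:          # nonnegative diagonal: M is a Markov matrix
        markov += 1
        ev = np.linalg.eigvals(M)
        if np.all(np.abs(ev.imag) < 1e-12) and np.all(ev.real > 0):
            positive += 1

# P(Markov) = P(b+c+d <= 1) * P(e+g+h <= 1) = (1/6)^2 ~ 2.8%, consistent
# with the 277628 Markov matrices out of 10^7 samples reported above.
print(markov / n, positive / n)
```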
An alternative approach for approximating the proportion of embeddable matrices within the model is to test the embeddability of the sample points with Algorithm 5.8 in Casanellas et al. (2023). Tables 4 and 5 below are analogous to Tables 1 and 2, but were obtained using the sampling method in (Roca-Lacostena 2021, Appendix A); on this sample, Algorithm 5.8 in Casanellas et al. (2023) and the inequalities in Theorem 2 yield identical results, which are the ones reported in Tables 4 and 5.
We used the Python implementation of Algorithm 5.8 in Casanellas et al. (2023) provided in (Roca-Lacostena 2021, Appendix A) and modified it to sample on the set of \(4 \times 4\) CS Markov matrices with positive eigenvalues. The original sampling method in (Roca-Lacostena 2021, Appendix A) consists of sampling uniformly on the set of \(4\times 4\) centrosymmetric Markov matrices until n samples (or as many samples as required) with positive eigenvalues are obtained.
Despite the fact that Theorem 2 and Algorithm 5.8 in Casanellas et al. (2023) were originally implemented in different programming languages (Wolfram Mathematica and Python, respectively) and were tested on different sample sets, the results obtained are quite similar, as illustrated by Tables 2 and 5. In fact, when we apply both Algorithm 5.8 in Casanellas et al. (2023) and Theorem 2 to the same sample set in Table 3, we obtain identical results, which are displayed in Tables 4 and 5.
It is worth noting that the embeddability criteria given in Theorem 2 use inequalities depending on the entries of the matrix, whereas Algorithm 5.8 in Casanellas et al. (2023) relies on the computation of its principal logarithm and its eigenvalues and eigenvectors, which may cause numerical issues when working with matrices with determinant close to 0. Moreover, the computation of logarithms can be computationally expensive. As a consequence, the algorithm implementing the criterion for embeddability arising from Theorem 2 is faster. Table 6 shows the running times for the implementation of both embeddability criteria used to obtain Table 5.
The Python implementation of Algorithm 5.8 in Casanellas et al. (2023) provided in (Roca-Lacostena 2021, Appendix A) can also be used to test the embeddability of any \(4\times 4\) CS Markov matrix (including those with non-real eigenvalues) without modifying the embeddability criteria. To do so, it is enough to apply the algorithm to a set of Markov matrices with different eigenvalues sampled uniformly from the set of all \(4\times 4\) CS Markov matrices. As hinted in Remark 3, this would also be possible using the embeddability criterion in Theorem 2 together with the bounds for k provided in (Casanellas et al. 2023, Theorem 5.5). Table 7 shows the results obtained when applying Algorithm 5.8 in Casanellas et al. (2023) to a set of \(10^7\) \(4\times 4\) CS Markov matrices sampled uniformly.
As most DLC and DD matrices have positive eigenvalues, the proportion of embeddable matrices within these subsets is almost the same when admitting matrices with non-positive eigenvalues (as in Table 7) as when considering only matrices with positive eigenvalues (as in Tables 2 and 5). On the other hand, the proportion of embeddable \(4\times 4\) CS matrices is much smaller in this case.
5 Centrosymmetric matrices and generalized Fourier transformation
In Sects. 3 and 4 we presented embeddability criteria for \(4\times 4\) centrosymmetric Markov matrices and the volumes of their relevant subsets. In this section, we extend this framework to larger matrices. This extension is motivated by synthetic biology, which aims to expand the genetic alphabet. For several decades, scientists have been developing ways to create novel forms of life with basic biochemical components and properties far removed from anything found in nature. In particular, they are working to expand the number of amino acids, which is only possible if the genetic alphabet itself can be expanded (see for example (Hoshika et al. 2019)).
5.1 Properties of centrosymmetric matrices
For a fixed \(n\in {\mathbb {N}}\), let \(V_n\) denote the set of all centrosymmetric matrices of order n. Moreover, let \(V_n^{Markov}\) and \(V_n^{rate}\) denote the sets of all centrosymmetric Markov and rate matrices of order n, respectively. As a subspace of the set of all \(n\times n\) real matrices, dim\((V_n)=\frac{n^2}{2}\) for n even, while dim\((V_n)=\lfloor \frac{n}{2}\rfloor (n+1)+1\) for n odd. Throughout, for any real number x, \(\lfloor x\rfloor \) and \(\lceil x\rceil \) denote the floor and the ceiling of x, respectively. We now establish some geometric properties of the sets \(V_n^{Markov}\) and \(V_n^{rate}\).
Proposition 4

1.
For n even, \(V_n^{Markov}\subseteq {\mathbb {R}}^{\frac{n(n-1)}{2}}_{\ge 0}\) is a Cartesian product of \(\frac{n}{2}\) standard \((n-1)\)-simplices and its volume is \(\frac{1}{((n-1)!)^{\frac{n}{2}}}.\) For n odd, \(V_n^{Markov}\subseteq {\mathbb {R}}^{\lfloor \frac{n}{2}\rfloor n}_{\ge 0}\) is a Cartesian product of \(\lfloor \frac{n}{2}\rfloor \) standard \((n-1)\)-simplices and the \(\lfloor \frac{n}{2}\rfloor \)-simplex with vertices \(\{0, \frac{e_i}{2}\}_{1\le i\le \lfloor \frac{n}{2}\rfloor }\cup \{e_{\lfloor \frac{n}{2}\rfloor +1}\}\), where \(e_i\) is the ith standard unit vector in \({\mathbb {R}}^n\). Hence, the volume of \(V_n^{Markov}\) is \(\frac{1}{2^{\lfloor \frac{n}{2}\rfloor }(\lfloor \frac{n}{2}\rfloor )!((n-1)!)^{\lfloor \frac{n}{2}\rfloor }}\).

2.
For n even, \(V_n^{rate}={\mathbb {R}}_{\ge 0}^{\frac{n(n-1)}{2}}\) and for n odd, \(V_n^{rate}={\mathbb {R}}_{\ge 0}^{\lfloor \frac{n}{2}\rfloor n}\).
Proof
Here we consider the following identification for an \(n\times n\) centrosymmetric matrix M. For n even, M can be thought of as a point \((M_1,\dots , M_{\frac{n}{2}})\in ({\mathbb {R}}^n_{\ge 0})^{\frac{n}{2}}\), where \(M_i\in {\mathbb {R}}^n_{\ge 0}\) corresponds to the ith row of M. Similarly, for n odd, we identify M with a point in \(({\mathbb {R}}^n_{\ge 0})^{\lfloor \frac{n}{2}\rfloor }\times {\mathbb {R}}^{\lfloor \frac{n}{2}\rfloor +1}_{\ge 0}\). Since M is a Markov matrix, under this identification each point \(M_i\) lies in a simplex, and therefore \(V^{Markov}_n\) is a Cartesian product of simplices. For n even, these simplices are copies of the standard \((n-1)\)-dimensional simplex:
For n odd and \(1\le i\le \lfloor \frac{n}{2}\rfloor \), the point \(M_i\) belongs to the standard \((n-1)\)-simplex above, while the point \(M_{\lfloor \frac{n}{2}\rfloor +1}\) belongs to the simplex
We now compute the volume of \(V^{Markov}_n\). Recall that the volume of a Cartesian product of spaces equals the product of the volumes of the factors whenever each factor has bounded volume. Moreover, the \((n-1)\)-dimensional volume of the standard simplex in Eq. (5.1) in \({\mathbb {R}}^{n-1}\) is \(\frac{1}{(n-1)!}.\) For n even, the statement follows immediately. For n odd, we use the fact that the \(\lfloor \frac{n}{2}\rfloor \)-dimensional volume of the simplex in Eq. (5.2) is \(\frac{1}{2^{\lfloor \frac{n}{2}\rfloor }(\lfloor \frac{n}{2}\rfloor )!}.\) We refer the reader to Stein (1966) for an introductory text on the volume of simplices.
For the second statement, we use the fact that if Q is a rate matrix, then \(q_{ii}=-\sum _{j\ne i}q_{ij}\), where \(q_{ij}\ge 0\) for \(i\ne j\). \(\square \)
In the rest of this section, let \(J_n\) be the \(n\times n\) antidiagonal matrix, i.e. its (i, j)-entries are one if \(i+j= n+1\) and zero otherwise. The following proposition collects some easily verified properties of the matrix \(J_n\).
Proposition 5
Let \(A=(a_{ij})\in M_n({\mathbb {R}})\). Then

1.
\((AJ_n)_{ij}=a_{i,n+1-j} \text{ and } (J_nA)_{ij}=a_{n+1-i,j}.\)

2.
A is a centrosymmetric matrix if and only if \(J_nAJ_n=A.\)
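Both items of the proposition are easy to verify numerically; a short sketch:

```python
import numpy as np

n = 5
J = np.fliplr(np.eye(n))            # the antidiagonal matrix J_n

A = np.arange(n * n, dtype=float).reshape(n, n)
# Item 1: right-multiplication by J_n reverses columns, left-multiplication reverses rows.
assert np.array_equal(A @ J, A[:, ::-1])
assert np.array_equal(J @ A, A[::-1, :])

# Item 2: A is centrosymmetric iff J A J = A. Averaging any A with J A J
# produces a centrosymmetric matrix, since J^2 = I.
C = 0.5 * (A + J @ A @ J)
assert np.array_equal(J @ C @ J, C)
```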
In Sect. 3, we have seen that \(4\times 4\) CS matrices can be block-diagonalized through the matrix S. We now present a construction of generalized Fourier matrices that block-diagonalizes any centrosymmetric matrix. Let us consider the following recursive construction of the \(n\times n\) matrix \(S_n\):
Proposition 6
For each natural number \(n\ge 3\), \(S_n\) is invertible and its inverse is given by
Proof
The proposition easily follows from the definition of \(S_n\). Indeed, we have
\(\square \)
The following proposition provides another block decomposition of the matrix \(S_n\) and its inverse.
Proposition 7
Let \(n\ge 2.\)

1.
For n even, \(S_n=\begin{pmatrix} I_{\frac{n}{2}}&{}J_{\frac{n}{2}}\\ J_{\frac{n}{2}}&{}-I_{\frac{n}{2}}\\ \end{pmatrix}\), while for n odd, \(S_n=\begin{pmatrix} I_{\lfloor \frac{n}{2}\rfloor }&{}0&{}J_{\lfloor \frac{n}{2}\rfloor }\\ 0&{}1&{}0\\ J_{\lfloor \frac{n}{2}\rfloor }&{}0&{}-I_{\lfloor \frac{n}{2}\rfloor }\\ \end{pmatrix}.\)

2.
Using these block partitions, \(S_n^{-1}=\frac{1}{2}S_n\) for n even, while \(S_n^{-1}=\begin{pmatrix} \frac{1}{2}I_{\lfloor \frac{n}{2}\rfloor }&{}0&{} \frac{1}{2}J_{\lfloor \frac{n}{2}\rfloor }\\ 0&{}1&{}0\\ \frac{1}{2}J_{\lfloor \frac{n}{2}\rfloor }&{}0&{} -\frac{1}{2}I_{\lfloor \frac{n}{2}\rfloor }\\ \end{pmatrix}\) for n odd.
Proof
The proof follows from induction on n and the fact that \(J_n^2=I_n\). \(\square \)
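Propositions 7 and 8 can be checked numerically. The sketch below assumes the sign convention in which the lower-right identity block of \(S_n\) carries a minus sign, so that \(S_n^2\) is diagonal (\(2I_n\) for n even):

```python
import numpy as np

def S(n: int) -> np.ndarray:
    """Block form of S_n: [[I, J], [J, -I]] for n even,
    [[I, 0, J], [0, 1, 0], [J, 0, -I]] for n odd."""
    m = n // 2
    I, J = np.eye(m), np.fliplr(np.eye(m))
    if n % 2 == 0:
        return np.block([[I, J], [J, -I]])
    z = np.zeros((m, 1))
    return np.block([[I, z, J], [z.T, np.ones((1, 1)), z.T], [J, z, -I]])

for n in (4, 5, 6, 7):
    Sn = S(n)
    # S_n^2 is diagonal, so inverting S_n just halves the doubled blocks.
    D = np.diag([2.0] * (n // 2) + [1.0] * (n % 2) + [2.0] * (n // 2))
    assert np.allclose(Sn @ Sn, D)

    # Proposition 8(1): for a symmetric vector v (J v = v), the last
    # floor(n/2) entries of S_n v vanish.
    v = np.arange(1, n + 1, dtype=float)
    v = v + v[::-1]
    assert np.allclose((Sn @ v)[-(n // 2):], 0)
```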
We call a vector \(v\in {\mathbb {R}}^n\) symmetric if \(v_i=v_{n+1-i}\) for every \(1\le i\le n\), i.e. \(J_n v=v.\) Moreover, we call a vector \(w\in {\mathbb {R}}^n\) antisymmetric if \(w_i=-w_{n+1-i}\) for every \(1\le i\le n\), i.e. \(J_n w=-w.\) The following technical proposition will be used later to simplify a centrosymmetric matrix.
Proposition 8
Let \(n\ge 2\). Let \(v\in {\mathbb {R}}^n\) be a symmetric vector and \(w\in {\mathbb {R}}^n\) be an antisymmetric vector.

1.
The last \(\lfloor \frac{n}{2}\rfloor \) entries of \(S_nv\) and \(v^TS_n\) are zero. Similarly, the last \(\lfloor \frac{n}{2}\rfloor \) entries of \(S_n^{-1}v\) and \(v^TS_n^{-1}\) are zero.

2.
The first \(\lfloor \frac{n}{2}\rfloor \) entries of \(S_nw\) and \(w^TS_n\) are zero. Similarly, the first \(\lfloor \frac{n}{2}\rfloor \) entries of \(S_n^{-1}w\) and \(w^TS_n^{-1}\) are zero.

3.
The sum of the entries of \(S_nv\) and of \(v^TS_n\) equals the sum of the entries of v.

4.
The sum of the entries of \(S_n^{-1}v\) and of \(v^TS_n^{-1}\) equals the sum of the first \(\lceil \frac{n}{2}\rceil \) entries of v.
Proof
We will only prove the first part of item (1), using mathematical induction on n. The base case \(n=2\) is immediate. Suppose now that the statement holds for all \(k<n\), and let \(v=\begin{pmatrix}v_1\\ v'\\ v_1 \end{pmatrix}\in {\mathbb {R}}^n\) be a symmetric vector. Then \(v'\in {\mathbb {R}}^{n-2}\) is also symmetric. By direct computation we obtain
The last \(\lfloor \frac{n-2}{2}\rfloor \) entries of \(S_{n-2}v'\) are zero. Thus, the last \(\lfloor \frac{n-2}{2}\rfloor +1=\lfloor \frac{n}{2}\rfloor \) entries of \(S_nv\) are zero as well. The proofs of the other statements can be obtained analogously using induction. In particular, let us note that the proof given for item (1) directly implies item (3). \(\square \)
For a fixed number n, let us define the following map:
For \(n=4,\) we have seen that if A is a CS matrix, then \(F_4(A)\) is a block-diagonal matrix where each block is of size \(2\times 2\) and is given by \(A_1\) and \(A_2\). Moreover, the upper block is a Markov matrix. The following lemma provides a generalization of these results.
Lemma 5
Let \(n\ge 2.\) Given an \(n\times n\) CS matrix A, \(F_n(A)\) is the following block-diagonal matrix
where \(A_1\) is a matrix of size \(\lceil \frac{n}{2}\rceil \times \lceil \frac{n}{2}\rceil \). Furthermore, if A is a Markov (rate) matrix, then \(A_1\) is also a Markov (rate) matrix.
Proof
First suppose that n is even. By (Cantoni and Butler 1976, Lemma 2), we can partition A into the following block matrices:
where \(B_1\) and \(B_2\) are of size \(\lfloor \frac{n}{2}\rfloor \times \lfloor \frac{n}{2}\rfloor .\) By Proposition 7, we have
Choose \(A_1=B_1+B_2J_{\frac{n}{2}}\). Now suppose that A is a Markov matrix. This means that each row of A sums to 1 and A has nonnegative entries. Therefore, for \(1\le k\le \frac{n}{2}\), we have
and for \(1\le j\le \frac{n}{2}\), \((B_1+B_2J_{\frac{n}{2}})_{kj}=a_{kj}+a_{k,\frac{n}{2}+j}\ge 0.\)
Now we consider the case when n is odd. Again by (Cantoni and Butler 1976, Lemma 2), we can partition A into the following block matrices:
where \(B_1, B_2 \in M_{\lfloor \frac{n}{2}\rfloor \times \lfloor \frac{n}{2}\rfloor }({\mathbb {R}}) \), \(p \text{ and } q\in M_{1\times \lfloor \frac{n}{2}\rfloor }({\mathbb {R}})\) and \(r\in M_{1 \times 1}({\mathbb {R}}).\) By Proposition 7, we have
In this case, choose \(A_1=\begin{pmatrix} B_1+B_2J_{\lfloor \frac{n}{2}\rfloor }&{}p\\ 2q&{}r\\ \end{pmatrix}.\) Suppose that A is a Markov matrix. Since each row of A sums to 1, we have
and for \(1\le k\le \lfloor \frac{n}{2}\rfloor \),
From the fact that the entries of A are nonnegative, for \(1\le k, j\le \lfloor \frac{n}{2}\rfloor \), we obtain that
Therefore, all entries of \(A_1\) sum to 1 and are nonnegative meaning that \(A_1\) is a Markov matrix as well. We can proceed similarly for the case when A is a rate matrix. \(\square \)
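Lemma 5 can be illustrated numerically. The sketch below builds a random CS Markov matrix by symmetrizing a random Markov matrix (a hypothetical construction: the average of M and \(J_nMJ_n\) is again Markov and is centrosymmetric) and applies the transformation \(F_n(A)=S_n^{-1}AS_n\):

```python
import numpy as np

n = 5
rng = np.random.default_rng(7)
J = np.fliplr(np.eye(n))

# Random CS Markov matrix: average a Markov matrix with its rotated copy J M J.
M = rng.dirichlet(np.ones(n), size=n)
A = 0.5 * (M + J @ M @ J)
assert np.allclose(J @ A @ J, A)        # A is centrosymmetric

# S_5 in block form [[I, 0, J], [0, 1, 0], [J, 0, -I]] (sign convention as above).
m = n // 2
Im, Jm = np.eye(m), np.fliplr(np.eye(m))
z = np.zeros((m, 1))
S = np.block([[Im, z, Jm], [z.T, np.ones((1, 1)), z.T], [Jm, z, -Im]])

F = np.linalg.inv(S) @ A @ S            # F_n(A)
k = (n + 1) // 2                        # ceil(n/2): size of the upper block A_1
assert np.allclose(F[:k, k:], 0) and np.allclose(F[k:, :k], 0)  # block diagonal

A1 = F[:k, :k]
assert np.all(A1 >= -1e-12) and np.allclose(A1.sum(axis=1), 1)  # A_1 is Markov
```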
Lemma 6
For any natural number n, let \(A_1=(\alpha _{i,j}),\) \(A_2=(\beta _{i,j}) \in M_{\lceil \frac{n}{2}\rceil \times \lceil \frac{n}{2}\rceil }({\mathbb {R}})\). Suppose that \(Q=diag(A_1,A_2)\) is a block diagonal matrix. Then

1.
\(F_n^{-1}(Q):=S_nQS_n^{-1}\) is a CS matrix.

2.
\(F_n^{-1}(Q)\) is a Markov matrix if and only if \(A_1\) is a Markov matrix and for any \(1\le i,j\le \lfloor \frac{n}{2}\rfloor ,\)
$$\begin{aligned} \alpha _{ij}+\beta _{\lfloor \frac{n}{2}\rfloor +1-i,\lfloor \frac{n}{2}\rfloor +1-j}\ge 0 \text{ and } \alpha _{i,\lfloor \frac{n}{2}\rfloor +1-j}-\beta _{\lfloor \frac{n}{2}\rfloor +1-i,j}\ge 0. \end{aligned}$$ 
3.
\(F_n^{-1}(Q)\) is a rate matrix if and only if \(A_1\) is a rate matrix, \(\alpha _{ii}+\beta _{\lfloor \frac{n}{2}\rfloor +1-i,\lfloor \frac{n}{2}\rfloor +1-i}\le 0\) for every \(1\le i\le \lfloor \frac{n}{2}\rfloor \), and for any \(1\le i,j\le \lfloor \frac{n}{2}\rfloor \) with \(i\ne j\),
$$\begin{aligned} \alpha _{ij}+\beta _{\lfloor \frac{n}{2}\rfloor +1-i,\lfloor \frac{n}{2}\rfloor +1-j}\ge 0 \text{ and } \alpha _{i,\lfloor \frac{n}{2}\rfloor +1-j}-\beta _{\lfloor \frac{n}{2}\rfloor +1-i,j}\ge 0. \end{aligned}$$
Proof
We will only prove the lemma for n even. Similar arguments will work for n odd as well. By Proposition 7,
Since \(J_{\frac{n}{2}}(A_1+J_{\frac{n}{2}}A_2J_{\frac{n}{2}})J_{\frac{n}{2}}=J_{\frac{n}{2}}A_1J_{\frac{n}{2}}+A_2\) and \(J_{\frac{n}{2}}(A_1J_{\frac{n}{2}}-J_{\frac{n}{2}}A_2)J_{\frac{n}{2}}= J_{\frac{n}{2}}A_1-A_2J_{\frac{n}{2}}\), by (Cantoni and Butler 1976, Lemma 2), \(F_n^{-1}(Q)\) is centrosymmetric, which proves (1). For \(1\le i\le \frac{n}{2}\),
The above equality means that for \(1\le i\le \frac{n}{2}\), the ith row sums of \(F_n^{-1}(Q)\) and \(A_1\) coincide. This implies that if \(F_n^{-1}(Q)\) is a Markov (rate) matrix, then \(A_1\) is a Markov (rate) matrix as well. Additionally, note that
Hence, (2) and (3) follow immediately.\(\square \)
5.2 Logarithms of centrosymmetric matrices
Given the special structure of centrosymmetric matrices, one may ask whether they admit logarithms that are also centrosymmetric. In this section, we provide some answers to this question.
Theorem 3
Let \(A\in M_n({\mathbb {R}})\) be a CS matrix. Then A has a CS logarithm if and only if both the upper block matrix \(A_1\) and the lower block matrix \(A_2\) in Lemma 5 admit a logarithm.
Proof
Suppose that A has a centrosymmetric logarithm Q. By Lemma 5, \(F_n(A)=\text{ diag }(A_1,A_2)\) and \(F_n(Q)=\text{ diag }(Q_1,Q_2).\) Then \(\exp (Q)=A\) implies that \(\exp (Q_1)=A_1\) and \(\exp (Q_2)=A_2.\) Hence, \(A_1\) and \(A_2\) admit a logarithm. Conversely, suppose that \(A_1\) and \(A_2\) admit logarithms \(Q_1\) and \(Q_2\), respectively. Then the matrix \(\text{ diag }(Q_1,Q_2)\) is a logarithm of the matrix \(\text{ diag }(A_1,A_2)\). By Lemma 6, the matrix \(F_n^{-1}(\text{ diag }(Q_1,Q_2))\) is a centrosymmetric logarithm of A. \(\square \)
Proposition 9
Let \(A\in M_n({\mathbb {R}})\) be a CS matrix. If A is invertible, then it has infinitely many CS logarithms.
Proof
The assumptions imply that the matrices \(A_1\) and \(A_2\) in Lemma 5 are invertible. By (Higham 2008, Theorem 1.28), both \(A_1\) and \(A_2\) have infinitely many logarithms. Hence, Theorem 3 implies that A has infinitely many centrosymmetric logarithms. \(\square \)
Proposition 10
Let \(A\in M_n({\mathbb {R}})\) be a CS matrix such that Log(A) is well-defined. Then Log(A) is again centrosymmetric.
Proof
Suppose that Log(A) is not centrosymmetric. Define the matrix \(Q=J_n(Log(A))J_n\). Then \(Q\ne Log(A)\) since Log(A) is not centrosymmetric. Moreover, since \(J_n^2=I_n\) and A is centrosymmetric, \(\exp (Q)=J_n\exp (Log(A))J_n=J_nAJ_n=A\), and the matrices Log(A) and Q have the same eigenvalues. Therefore, Q is also a principal logarithm of A, contradicting the uniqueness of the principal logarithm. Hence, Log(A) must be centrosymmetric. \(\square \)
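Proposition 10 is easy to observe numerically. Below, a hypothetical CS rate matrix Q (chosen for illustration) is exponentiated to a CS Markov matrix; its principal logarithm, computed with SciPy's `logm`, is again centrosymmetric and, since the eigenvalues of Q are real, recovers Q itself:

```python
import numpy as np
from scipy.linalg import expm, logm

# A hypothetical CS rate matrix (rows sum to zero, off-diagonal nonnegative).
Q = np.array([[-0.5, 0.2, 0.2, 0.1],
              [ 0.3,-0.6, 0.1, 0.2],
              [ 0.2, 0.1,-0.6, 0.3],
              [ 0.1, 0.2, 0.2,-0.5]])
A = expm(Q)

J = np.fliplr(np.eye(4))
assert np.allclose(J @ A @ J, A)          # A is centrosymmetric

L = logm(A)                               # principal logarithm Log(A)
if np.iscomplexobj(L):
    assert np.allclose(L.imag, 0)
    L = L.real
assert np.allclose(J @ L @ J, L)          # Log(A) is centrosymmetric too
assert np.allclose(L, Q)                  # here Log(A) recovers the generator
```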
The following theorem characterizes the logarithms of invertible CS Markov matrices.
Theorem 4
Let \(A\in M_n({\mathbb {R}})\) be an invertible CS Markov matrix. Let \(A_1=N_1D_1N_1^{-1}\), where \(D_1=diag(R_1,R_2,\dots , R_l)\) is a Jordan form of the upper block matrix in Lemma 5. Similarly, let \(A_2=N_2D_2N_2^{-1}\), where \(D_2=diag(T_1,T_2,\dots , T_l)\) is a Jordan form of the lower block matrix in Lemma 5. Then A has countably infinitely many logarithms given by
where
and \(D_i'\) denotes a logarithm of \(D_i\). In particular, these logarithms of A are primary functions of A.
Proof
The theorem follows immediately from (Higham 2008, Theorem 1.28). \(\square \)
For the definition of a primary function of a matrix, we refer the reader to Higham (2008). The above theorem says that the set of logarithms of a nonsingular centrosymmetric matrix contains countably infinitely many primary logarithms, and that these are centrosymmetric matrices as well.
Finally, we present a necessary condition for the embeddability of CS Markov matrices in higher dimensions.
Lemma 7
Let \(n\ge 2\). Suppose that \(A=(a_{ij})\) is an embeddable CS Markov matrix of size \(n\times n\) with a CS Markov generator. Then for n even,
while for n odd,
Proof
Since A is embeddable with a CS Markov generator, we can write \(A=\exp (Q)\) for some CS rate matrix Q, and then
By Lemma 5, for the centrosymmetric matrices A and Q, we have \(F_n(A)=diag (A_1,A_2)\) and \(F_n(Q)=diag (Q_1,Q_2)\), where \(A_1\) is a Markov matrix and \(Q_1\) is a rate matrix of size \(\lceil \frac{n}{2}\rceil \times \lceil \frac{n}{2}\rceil .\) Therefore, \(A_1=\exp (Q_1).\) If \(\lambda _1,\dots , \lambda _{\lceil \frac{n}{2}\rceil }\) are the eigenvalues of \(Q_1\) (not necessarily distinct), then the eigenvalues of \(A_1\) are \(e^{\lambda _1},\dots ,e^{\lambda _{\lceil \frac{n}{2}\rceil }}\). Since one of the \(\lambda _i\) is zero, the trace of \(A_1\), which is the sum of its eigenvalues, is equal to
We now need to show that the trace of \(A_1\) has the form stated in the lemma. Suppose that n is even. By the proof of Lemma 5,
The proof for odd n can be obtained similarly.\(\square \)
Let \(X_n\subseteq V_n^{Markov}\) be the subset of all centrosymmetrically embeddable Markov matrices. We want to obtain an upper bound on the volume of \(X_n\) using Lemma 7. Let \(Y_n\subseteq V_n^{Markov}\) be the subset of all centrosymmetric Markov matrices such that, after applying the generalized Fourier transformation, the trace of the upper block matrix is greater than 1. The previous lemma implies that \(X_n\subseteq Y_n\) and hence \(v(X_n)\le v(Y_n)\). Moreover, the upper bound \(v(Y_n)\) is easy to compute since \(Y_n\) is a polytope, and for some values of n these volumes are presented in Table 8. From Table 8 we see that at most \(50\%\) of the matrices in \(V_4^{Markov}\) are centrosymmetrically embeddable, so the upper bound \(v(Y_4)\) is not tight. For \(n=5\), at most approximately \(62\%\) of the matrices in \(V_5^{Markov}\) are centrosymmetrically embeddable, but for \(n=6\) this upper bound gives a much better proportion of approximately \(0.1\%\).
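For \(n=4\), the relative volume \(v(Y_4)/v(V_4^{Markov})\approx 1/2\) can be reproduced by sampling. Writing the CS matrix as \([[a,b,c,d],[e,f,g,h],[h,g,f,e],[d,c,b,a]]\), the trace of the upper block from the proof of Lemma 5 is \((a_{11}+a_{14})+(a_{22}+a_{23})=(a+d)+(f+g)\). A Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples = 100_000

# Uniform sampling on V_4^Markov: two independent rows uniform on the 3-simplex.
row1 = rng.dirichlet(np.ones(4), size=n_samples)   # (a, b, c, d)
row2 = rng.dirichlet(np.ones(4), size=n_samples)   # (e, f, g, h)

# trace(A_1) = (a + d) + (f + g); Y_4 is cut out by trace(A_1) > 1.
trace_A1 = row1[:, 0] + row1[:, 3] + row2[:, 1] + row2[:, 2]
frac = (trace_A1 > 1).mean()
print(frac)     # close to v(Y_4) / v(V_4^Markov) = 1/2
```

Here \(a+d\) and \(f+g\) are independent Beta(2, 2) variables, each symmetric about 1/2, so their sum exceeds 1 with probability exactly 1/2, matching the table.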
6 Embeddability of \(6\times 6\) centrosymmetric matrices
Throughout this section, A denotes a \(6 \times 6\) centrosymmetric Markov matrix with distinct eigenvalues. In particular, the matrices considered in this section are diagonalizable and form a dense subset of all \(6 \times 6\) centrosymmetric Markov matrices. Note that this notation differs from the notation for Markov matrices used in previous sections; it is chosen for consistency with the results presented for generic centrosymmetric matrices.
In the previous section, we showed that F(A) is a block-diagonal real matrix composed of two \(3\times 3\) blocks, denoted by \(A_1\) and \(A_2\). Since both \(A_1\) and \(A_2\) have real entries, each of these matrices has at most one conjugate pair of eigenvalues. Adapting the notation introduced in Theorem 4 to diagonalizable matrices, we have \(N_1,N_2 \in GL_3({\mathbb {C}})\) such that \(A_1=N_1\, diag(1,\lambda _1, \lambda _2)\, N_1^{-1}\) and \(A_2=N_2\, diag(\mu ,\gamma _1, \gamma _2)\, N_2^{-1}\) with \(\mu \in {\mathbb {R}}_{>0}\) and \(\lambda _i,\gamma _i \in {\mathbb {C}}{\setminus } {\mathbb {R}}_{\ge 0}\). Moreover, we can assume without loss of generality that \(Im(\lambda _1)>0\) (this can be achieved by permuting the second and third columns of \(N_1\) if necessary). For ease of reading, we define \(P:= S_6 \,diag(N_1,N_2)\), where \(S_6\) is the matrix used to obtain the Fourier transform F(A), introduced in Sect. (5.3).
Next we give a criterion for the embeddability of A for each of the following cases:
Proposition 11
If a \(6\times 6\) centrosymmetric Markov matrix A does not belong to any of the cases in Table 6.1, then it is not embeddable.
Proof
If A satisfies the hypothesis of the proposition, then either it has a null eigenvalue or it has a simple negative eigenvalue. In the former case, A is a singular matrix and hence has no logarithm. In the latter case, any real logarithm of A would have a non-real eigenvalue mapping to the negative eigenvalue under exponentiation; since the eigenvalues of a real matrix come in conjugate pairs, the conjugate would also be an eigenvalue of the logarithm and would exponentiate to the same negative value, so A would have a repeated eigenvalue, a contradiction. Therefore, A has no real logarithm.
\(\square \)
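The negative-eigenvalue case can be observed numerically with a hypothetical \(2\times 2\) example (eigenvalues 1 and \(-0.8\)); the principal logarithm returned by SciPy is necessarily non-real, and SciPy may emit an accuracy warning for such inputs:

```python
import numpy as np
from scipy.linalg import logm

# A Markov matrix with a simple negative eigenvalue (eigenvalues 1 and -0.8).
M = np.array([[0.1, 0.9],
              [0.9, 0.1]])
L = logm(M)
# The logarithm is genuinely complex: no real logarithm exists,
# consistent with the argument in the proof above.
assert np.iscomplexobj(L) and np.abs(L.imag).max() > 1e-8
assert np.allclose(np.linalg.eigvals(M), sorted([1.0, -0.8], reverse=True))
```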
Remark 7
All the results in this section can be adapted to \(5\times 5\) centrosymmetric Markov matrices by not considering the eigenvalue \(\mu \) and modifying the forthcoming definitions of the matrices \(Log_k(A)\) and V accordingly (i.e. removing the fourth row and column in the corresponding diagonal matrix). In addition, these results still hold if the eigenvalue 1 of the Markov matrix has multiplicity 2.
6.1 Case 1
The results for this case are not restricted to centrosymmetric matrices but can be applied to decide the embeddability of any suitable Markov matrix.
Proposition 12
If all the eigenvalues of a Markov matrix A are distinct and positive, then A is embeddable if and only if Log(A) is a rate matrix.
Proof
If A has distinct real eigenvalues then it has only one real logarithm, which is Log(A) (see Culver 1966). Hence A is embeddable if and only if this unique candidate is a rate matrix. \(\square \)
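Proposition 12 thus turns embeddability into a single computation: take the principal logarithm and check the rate-matrix conditions. A minimal sketch for diagonalizable matrices (the helper names are ours; a production implementation would use a Schur-based matrix logarithm such as SciPy's `logm`):

```python
import numpy as np

def principal_log(A):
    """Principal logarithm of a diagonalizable matrix via
    eigendecomposition: Log(A) = N diag(Log w) N^{-1}, taking the
    principal branch of the logarithm on each eigenvalue."""
    w, N = np.linalg.eig(A)
    D = np.diag(np.log(w.astype(complex)))
    return (N @ D @ np.linalg.inv(N)).real

def is_rate_matrix(Q, tol=1e-10):
    """Rows sum to zero and all off-diagonal entries are nonnegative."""
    off_ok = np.all(Q - np.diag(np.diag(Q)) >= -tol)
    rows_ok = np.allclose(Q.sum(axis=1), 0.0, atol=1e-8)
    return bool(off_ok and rows_ok)

# Example: A = exp(Q) for a rate matrix Q is embeddable by construction,
# and its eigenvalues (1, e^{-0.5}, e^{-0.6}) are distinct and positive.
Q = np.array([[-0.3, 0.2, 0.1],
              [0.1, -0.4, 0.3],
              [0.2, 0.2, -0.4]])
w, N = np.linalg.eig(Q)
A = (N @ np.diag(np.exp(w)) @ np.linalg.inv(N)).real
print(is_rate_matrix(principal_log(A)))  # True
```

Since the eigenvalues of A are distinct and positive, the recovered generator is unique and coincides with Q up to numerical error.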
6.2 Case 2
In this case A has exactly one conjugate pair of complex eigenvalues and we obtain the following criterion by adapting Corollary 5.6 in Casanellas et al. (2023) to our framework:
Proposition 13
Given the matrix \(V:= P \; diag(0,0,0,0,2\pi i, -2\pi i) \;P^{-1}\), define
$$\begin{aligned} {\mathcal {L}}:= \max _{i\ne j,\ V_{i,j}>0} \left\lceil -\frac{Log(A)_{i,j}}{V_{i,j}} \right\rceil , \qquad {\mathcal {U}}:= \min _{i\ne j,\ V_{i,j}<0} \left\lfloor -\frac{Log(A)_{i,j}}{V_{i,j}} \right\rfloor , \end{aligned}$$
and set \( \ {\mathcal {N}}:= \{(i,j): i\ne j, \ V_{i,j}=0 \text { and } Log(A)_{i,j}<0\}.\) Then,

1. A is embeddable if and only if \({\mathcal {N}} = \emptyset \) and \( {\mathcal {L}} \le {\mathcal {U}}\).
2. The set of Markov generators for A is \(\Big \{ Q=Log(A) + k V: k\in {\mathbb {Z}} \text { such that } {\mathcal {L}} \le k \le {\mathcal {U}} \Big \}\).
Proof
The proof of this theorem is analogous to the proof of Theorem 5.5 in Casanellas et al. (2020a) but considering the matrix V as defined here. According to Proposition 1, any Markov generator of A is of the form
$$\begin{aligned} Q = P\; diag(0, \log (\lambda _1), \log (\lambda _2), \log (\mu ), \log _{k}(\gamma _1), \overline{\log _{k}(\gamma _1)}) \;P^{-1} \end{aligned}$$
for some \(k\in {\mathbb {Z}}\), where \(\log _k(x) := \log (x) + 2k\pi i\). Such a logarithm can be rewritten as \(Log(A)+kV\). Using this, we will prove that \(Log_k(A)=Log(A)+kV\) is a rate matrix if and only if \({\mathcal {N}} = \emptyset \) and \({\mathcal {L}} \le k \le {\mathcal {U}}\).
Suppose that there exists \(k\in {\mathbb {Z}}\) such that \(Log_k(A)\) is a rate matrix. Hence, \(Log(A)_{i,j}+kV_{i,j}\ge 0\) for all \(i\ne j\). For \(i\ne j\), we have:

(a) \(Log(A)_{i,j}\ge 0\) for all \(i\ne j\) such that \(V_{i,j}=0\). This means that \({\mathcal {N}}=\emptyset .\)
(b) \(-\frac{Log(A)_{i,j}}{V_{i,j}}\le k\) for all \(i\ne j\) such that \(V_{i,j}>0\). This means that \({\mathcal {L}}\le k\).
(c) \(-\frac{Log(A)_{i,j}}{V_{i,j}}\ge k\) for all \(i\ne j\) such that \(V_{i,j}<0\). This means that \(k\le {\mathcal {U}}\).
Conversely, suppose that \({\mathcal {N}} = \emptyset \) and that there is \(k\in {\mathbb {Z}}\) such that \({\mathcal {L}} \le k \le {\mathcal {U}}\). We want to check that \(Log_k(A)\) is a rate matrix. According to Proposition 1, each row of \(Log_k(A)\) sums to 0. Moreover, for \(i\ne j\), we have:

(a) if \(V_{i,j}=0\), then \(Log_k(A)_{i,j}=Log(A)_{i,j}\). Since \({\mathcal {N}}=\emptyset \), \(Log_k(A)_{i,j}=Log(A)_{i,j}\ge 0\).
(b) if \(V_{i,j}>0\), then \(Log_k(A)_{i,j}=Log(A)_{i,j}+k V_{i,j}\ge Log(A)_{i,j}+{\mathcal {L}} V_{i,j}\ge Log(A)_{i,j}+\big (-\frac{Log(A)_{i,j}}{V_{i,j}}\big ) V_{i,j}=0.\)
(c) if \(V_{i,j}<0\), then \(Log_k(A)_{i,j}=Log(A)_{i,j}+k V_{i,j}\ge Log(A)_{i,j}+{\mathcal {U}} V_{i,j}\ge Log(A)_{i,j}+\big (-\frac{Log(A)_{i,j}}{V_{i,j}}\big ) V_{i,j}=0.\)
The proof is now complete. \(\square \)
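The quantities \({\mathcal {N}}\), \({\mathcal {L}}\) and \({\mathcal {U}}\) of Proposition 13 are elementwise computations on Log(A) and V. A sketch with our own function name; the toy matrices at the end are arbitrary numbers chosen only to exercise the bookkeeping, not a genuine Case 2 pair:

```python
import numpy as np
from math import ceil, floor

def generator_range(LogA, V, tol=1e-12):
    """Return (N_empty, L, U): N_empty is True iff the obstruction set
    N = {(i,j): i != j, V_ij = 0, Log(A)_ij < 0} is empty, and [L, U]
    is the integer range of k for which Log(A) + k V can be a rate
    matrix (L = -inf / U = +inf when the side is unconstrained)."""
    n = LogA.shape[0]
    N_empty, L, U = True, -np.inf, np.inf
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if abs(V[i, j]) < tol:
                if LogA[i, j] < -tol:
                    N_empty = False          # entry (i,j) lies in N
            elif V[i, j] > 0:
                L = max(L, ceil(-LogA[i, j] / V[i, j]))   # lower bound on k
            else:
                U = min(U, floor(-LogA[i, j] / V[i, j]))  # upper bound on k
    return N_empty, L, U

# Toy 2x2 numbers: entry (0,1) forces k >= -2, entry (1,0) forces k <= 4.
LogA = np.array([[-1.0, 1.0], [2.0, -2.0]])
V = np.array([[0.0, 0.5], [-0.5, 0.0]])
print(generator_range(LogA, V))  # (True, -2, 4)
```

By the proposition, each integer k in the returned range yields the Markov generator Log(A) + k V, and embeddability holds precisely when the set is nonempty and the range is consistent.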
6.3 Case 3
As in Case 2, A has exactly one conjugate pair of eigenvalues and hence its embeddability (and all its generators) can be determined by using Proposition 13, but defining the matrix V as \(V=P \; diag(0, 2\pi i, -2\pi i, 0, 0, 0) \;P^{-1}\). However, in Case 3 the conjugate pair of eigenvalues lies in \(A_1\), which is a Markov matrix. This allows us to use the results regarding the embeddability of \(3\times 3\) Markov matrices to obtain an alternative criterion to test the embeddability of A. To this end we define
$$\begin{aligned} Log_{-1}(A):= P\; diag(0, z, {\overline{z}}, \log (\mu ), \log (\gamma _1), \log (\gamma _2)) \;P^{-1}, \end{aligned}$$
where \(z:= \log _{-1}(\lambda _1) = \log (\lambda _1) - 2\pi i\).
Proposition 14
The matrix A is embeddable if and only if Log(A) or \(Log_{-1}(A)\) is a rate matrix.
Proof
Note that \(\exp (Log(A))=\exp (Log_{-1}(A))=A\), so one implication is immediate. To prove the other implication, we assume that A is embeddable and let Q be a Markov generator for it. Proposition 1 yields that
$$\begin{aligned} Q = P\; diag(0, \log _{k_1}(\lambda _1), \log _{k_2}(\lambda _2), \log _{k_3}(\mu ), \log _{k_4}(\gamma _1), \log _{k_5}(\gamma _2)) \;P^{-1} \end{aligned}$$
for some integers \(k_1,\dots ,k_5 \in {\mathbb {Z}}\). Therefore, \(F(Q)=\begin{pmatrix} Q_1 &{} 0\\ 0 &{} Q_2\\ \end{pmatrix}\) where \(Q_1\) and \(Q_2\) are real logarithms of \(A_1\) and \(A_2\) respectively.
Since \(A_2\) is a real matrix with distinct positive eigenvalues, its only real logarithm is its principal logarithm. This implies that \(k_3=k_4=k_5=0\) (so that \(Q_2=Log(A_2)\)).
Now, recall that \(A_1\) is a Markov matrix (see Lemma 5). Using Proposition 1 again, we obtain that \(Q_1\) is a rate matrix, thus \(A_1\) is embeddable. To conclude the proof it is enough to recall Theorem 4 in James (1973), which yields that \(A_1\) is embeddable if and only if \(Log(A_1)\) or \(P_1\;diag(0,z,{\overline{z}}) \;P_1^{-1}\) is a rate matrix. \(\square \)
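In practice, Proposition 14 requires assembling only one alternative candidate, \(Log_{-1}(A)\). Assuming P and the eigenvalues ordered as \((1, \lambda _1, \overline{\lambda _1}, \mu , \gamma _1, \gamma _2)\) are already available (function names and signatures below are ours), a sketch:

```python
import numpy as np

def log_branch(x, k=0):
    """k-th branch of the complex logarithm: log(x) + 2*pi*i*k."""
    return np.log(complex(x)) + 2j * np.pi * k

def candidate_log_minus1(P, lam1, mu, gammas):
    """Assemble Log_{-1}(A) = P diag(0, z, conj(z), log mu, log g1,
    log g2) P^{-1} with z = log_{-1}(lam1), following the definition
    in the text; the result is real when P conjugates the conjugate
    pair correctly."""
    z = log_branch(lam1, k=-1)
    D = np.diag([0.0, z, np.conj(z),
                 np.log(mu), np.log(gammas[0]), np.log(gammas[1])])
    return (P @ D @ np.linalg.inv(P)).real
```

Embeddability is then decided by running the rate-matrix check on both the principal logarithm and this single alternative candidate.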
6.4 Case 4
In this case, the solution to the embedding problem can be obtained as a byproduct of the results for the previous cases:
Proposition 15
Let \(Log_{0,0}(A)\) denote the principal logarithm of A and \(Log_{-1,0}(A)\) denote the matrix in (6.2). Given the matrix \(V:= P \; diag(0,0,0,0,2\pi i, -2\pi i) \;P^{-1}\) and \(k\in \{0,-1\}\), define
$$\begin{aligned} {\mathcal {L}}_k:= \max _{i\ne j,\ V_{i,j}>0} \left\lceil -\frac{Log_{k,0}(A)_{i,j}}{V_{i,j}} \right\rceil , \qquad {\mathcal {U}}_k:= \min _{i\ne j,\ V_{i,j}<0} \left\lfloor -\frac{Log_{k,0}(A)_{i,j}}{V_{i,j}} \right\rfloor , \end{aligned}$$
and set \( \ {\mathcal {N}}_k:= \{(i,j): i\ne j, \ V_{i,j}=0 \text { and } Log_{k,0}(A)_{i,j}<0\}.\) Then,

1. A is embeddable if and only if \({\mathcal {N}}_k = \emptyset \) and \( {\mathcal {L}}_k \le {\mathcal {U}}_k\) for \(k=0\) or \(k=-1\).
2. If A is embeddable, then at least one of its Markov generators can be written as
$$\begin{aligned} Log_{k,k_2}(A):=P\; diag(0, \log _{k}(\lambda _1),\overline{\log _{k}(\lambda _1)}, \log (\mu ), \log _{k_2}(\gamma _1), \overline{\log _{k_2}(\gamma _1)}) \;P^{-1} \end{aligned}$$with \(k\in \{0,-1\}\) and \(k_2\in {\mathbb {Z}}\) such that \({\mathcal {L}}_{k} \le k_2 \le {\mathcal {U}}_{k}\).
Proof
The matrix A is embeddable if and only if it admits a Markov generator. According to Proposition 1, if such a generator Q exists then it can be written as \(Log_{k_1,k_2}(A)\) for some \(k_1,k_2 \in {\mathbb {Z}}\). Moreover, Lemma 5 implies that \(F(A)=\begin{pmatrix} A_1 &{} 0\\ 0 &{} A_2\\ \end{pmatrix}\) for some matrices \(A_1\) and \(A_2\), and \(F(Q)=\begin{pmatrix} Q_1 &{} 0\\ 0 &{} Q_2\\ \end{pmatrix}\) where \(Q_1\) and \(Q_2\) are real logarithms of \(A_1\) and \(A_2\) respectively.
As shown in the proof of Proposition 14, \(A_1\) is actually a Markov matrix and \(Q_1\) is a Markov generator for it (see also Lemma 5). Moreover, by Theorem 4 in James (1973), \(A_1\) is embeddable if and only if \(Log(A_1)\) or \(Log_{-1}(A_1)\) is a rate matrix. This implies that \(Log_{k_1,k_2}(A)\) is a rate matrix for some \(k_1\in {\mathbb {Z}}\) only if \(Log_{0,k_2}(A)\) or \(Log_{-1,k_2}(A)\) is a rate matrix. To conclude the proof we proceed as in the proof of Proposition 13. Indeed, note that for \(k\in \{0,-1\}\), \(Log_{k,k_2}(A) = Log_{k,0}(A) + k_2 V\). Using this, it is immediate to check that \(Log_{k,k_2}(A)\) is a rate matrix if and only if \({\mathcal {N}}_{k} = \emptyset \) and \( {\mathcal {L}}_{k} \le k_2\le {\mathcal {U}}_{k}\). \(\square \)
7 Discussion
The central symmetry is motivated by the complementarity between both strands of the DNA. When a nucleotide substitution occurs in one strand, there is also a substitution between the corresponding complementary nucleotides on the other strand. Therefore, working with centrosymmetric Markov matrices is the most general approach when considering both DNA strands.
In this paper, we have discussed the embedding problem for centrosymmetric Markov matrices. In Theorem 2, we obtained a characterization of the embeddability of \(4\times 4\) centrosymmetric Markov matrices, which are exactly the strand symmetric Markov matrices. In particular, we also showed that if a \(4\times 4\) CS Markov matrix is embeddable, then any of its Markov generators is also a CS matrix. Furthermore, in Sect. 6, we discussed embeddability criteria for larger centrosymmetric matrices.
As a consequence of the characterization of Theorem 2, we have been able to compute and compare the volume of the embeddable \(4\times 4\) CS Markov matrices within some subspaces of \(4\times 4\) CS Markov matrices. These volume comparisons can be seen in Table 2 and Table 7. For larger matrices, using the results in Sect. 6, we have estimated the proportion of embeddable matrices within the set of all \(6\times 6\) centrosymmetric Markov matrices and within the subsets of DLC and DD matrices. This is summarized in Table 9 below. The computations were repeated several times; the resulting values showed small differences but kept the same order of magnitude and leading digits.
In Sects. 3 and 6 we considered in detail the embeddability of CS Markov matrices only for sizes \(n=4\) and \(n=6\). We expect that, as n grows, the proportion of embeddable CS Markov matrices within the set of CS Markov matrices tends to zero, as indicated by Tables 2, 7, 8, and 9.
These results, together with the results obtained for the strand symmetric model (see Table 7), indicate that restricting to homogeneous continuous-time Markov processes is a very strong restriction, because non-embeddable matrices are discarded and their proportion is much larger than that of embeddable matrices. For instance, in the \(2\times 2\) case exactly \(50\%\) of the matrices are discarded (Ardiyansyah et al. 2021, Table 5), while for \(4\times 4\) matrices up to \(98.26545\%\) are discarded (see Table 7) and for \(6\times 6\) matrices about \(99.99863\%\) are discarded, as indicated in Table 9. However, when restricting to subsets of Markov matrices that are more meaningful in biological terms, such as DD or DLC matrices, the proportion of embeddable matrices is much higher, so fewer matrices are discarded (e.g. for DD we discard \(68.41679\%\) of \(4\times 4\) matrices and \(97.2441\%\) of \(6\times 6\) matrices). This is not to say that it makes no sense to use continuous-time models, but to highlight that one should take these restrictions into consideration when working with them. Conversely, when working with the whole set of Markov matrices, one has to be aware of possibly including many matrices that are not biologically meaningful.
References
Aitken AC (2017) Determinants and matrices. Read Books Ltd
Ardiyansyah M, Kosta D, Kubjas K (2021) The model-specific Markov embedding problem for symmetric group-based models. J Math Biol 83(3):1–26
Baake M, Sumner J (2020) Notes on Markov embedding. Linear Algebra Appl 594:262–299
Bain J, Switzer C, Chamberlin R, Benner SA (1992) Ribosome-mediated incorporation of a nonstandard amino acid into a peptide through expansion of the genetic code. Nature 356(6369):537–539
Benner SA, Sismour AM (2005) Synthetic biology. Nat Rev Genet 6(7):533–543
Cantoni A, Butler P (1976) Eigenvalues and eigenvectors of symmetric centrosymmetric matrices. Linear Algebra Appl 13(3):275–288
Carette P (1995) Characterizations of embeddable 3\(\times \)3 stochastic matrices with a negative eigenvalue. New York J Math 1:120–129
Casanellas M, Kedzierska AM (2013) Generating Markov evolutionary matrices for a given branch length. Linear Algebra Appl 438(5):2484–2499
Casanellas M, Sullivant S (2005) The strand symmetric model. In: Pachter L, Sturmfels B (eds) Algebraic statistics for computational biology. Cambridge University Press, New York
Casanellas M, Fernández-Sánchez J, Roca-Lacostena J (2020a) Embeddability and rate identifiability of Kimura 2-parameter matrices. J Math Biol 80(4):995–1019
Casanellas M, Fernández-Sánchez J, Roca-Lacostena J (2020b) An open set of \(4\times 4\) embeddable matrices whose principal logarithm is not a Markov generator. Linear Multilinear Algebra pp 1–12
Casanellas M, Fernández-Sánchez J, Roca-Lacostena J (2023) The embedding problem for Markov matrices. Publicacions Matemàtiques 67:411–445
Chang JT (1996) Full reconstruction of Markov models on evolutionary trees: identifiability and consistency. Math Biosci 137(1):51–73
Chen Y, Chen J (2011) On the imbedding problem for three-state time-homogeneous Markov chains with coinciding negative eigenvalues. J Theor Probab 24:928–938
Culver WJ (1966) On the existence and uniqueness of the real logarithm of a matrix. Proc Am Math Soc 17:1146–1151
Cuthbert JR (1972) On uniqueness of the logarithm for Markov semigroups. J Lond Math Soc 2(4):623–630
Davies E et al (2010) Embeddable Markov matrices. Electron J Probab 15:1474–1486
Elfving G (1937) Zur Theorie der Markoffschen Ketten. Acta Societatis Scientiarum Fennicæ, Nova Series A 2(8):17 pages
Fuglede B (1988) On the imbedding problem for stochastic and doubly stochastic matrices. Probab Theory Relat Fields 80:241–260
Gawrilow E, Joswig M (2000) Polymake: a framework for analyzing convex polytopes. In: Polytopes: combinatorics and computation, Springer, pp 43–73
Goodman GS (1970) An intrinsic time for nonstationary finite Markov chains. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 16:165–180
Hammersley J (2013) Monte Carlo methods. Springer Science & Business Media
Higham NJ (2008) Functions of matrices: Theory and computation, vol 104. SIAM
Hoshika S, Leal NA, Kim MJ, Kim MS, Karalkar NB, Kim HJ, Bates AM, Watkins NE, SantaLucia HA, Meyer AJ et al (2019) Hachimoji DNA and RNA: a genetic system with eight building blocks. Science 363(6429):884–887
Wolfram Research, Inc. (2022) Mathematica, Version 13.1. https://www.wolfram.com/mathematica
Iosifescu M (2014) Finite Markov processes and their applications. Courier Corporation
James CR (1973) The logarithm function for finite-state Markov semigroups. J Lond Math Soc 2(3):524–532
Jia C (2016) A solution to the reversible embedding problem for finite Markov chains. Statist Probab Lett 116:122–130
Johansen S (1974) Some results on the imbedding problem for finite Markov chains. J Lond Math Soc s2-8(2):345–351
Kimura M (1957) Some problems of stochastic processes in genetics. Ann Math Stat pp 882–901
Kingman JFC (1962) The imbedding problem for finite Markov chains. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 1(1):14–24
Leal NA, Kim HJ, Hoshika S, Kim MJ, Carrigan MA, Benner SA (2015) Transcription, reverse transcription, and analysis of RNA containing artificial genetic components. ACS Synth Biol 4(4):407–413
Malyshev DA, Dhami K, Quach HT, Lavergne T, Ordoukhanian P, Torkamani A, Romesberg FE (2012) Efficient and sequence-independent replication of DNA containing a third base pair establishes a functional six-letter genetic alphabet. Proc Natl Acad Sci 109(30):12005–12010
Pachter L, Sturmfels B (2005) Algebraic statistics for computational biology, vol 13. Cambridge University Press, UK
Roca-Lacostena J (2021) The embedding problem for Markov matrices. PhD thesis, Universitat Politècnica de Catalunya
Roca-Lacostena J, Fernández-Sánchez J (2018) Embeddability of Kimura 3ST Markov matrices. J Theor Biol 445:128–135
Runnenberg JT (1962) On Elfving's problem of imbedding a time-discrete Markov chain in a time-continuous one for finitely many states. Proceedings of the KNAW, Series A, Mathematical Sciences 65:536–541
Schensted IV (1958) Appendix: Model of subnuclear segregation in the macronucleus of ciliates. Am Nat 92(864):161–170
Sismour AM, Lutz S, Park JH, Lutz MJ, Boyer PL, Hughes SH, Benner SA (2004) PCR amplification of DNA containing non-standard base pairs by variants of reverse transcriptase from human immunodeficiency virus-1. Nucleic Acids Res 32(2):728–735
Stein P (1966) A note on the volume of a simplex. Am Math Mon 73(3):299–301
Weaver JR (1985) Centrosymmetric (cross-symmetric) matrices, their basic properties, eigenvalues, and eigenvectors. Am Math Mon 92(10):711–717
Yang Z, Hutter D, Sheng P, Sismour A, Benner S (2006) Artificially expanded genetic information system: a new base pair with an alternative hydrogen bonding pattern. Nucleic Acids Res 34(21):6095–101
Yang Z, Sismour AM, Sheng P, Puskar NL, Benner SA (2007) Enzymatic incorporation of a third nucleobase pair. Nucleic Acids Res 35(13):4238–4249
Yang Z, Chen F, Alvarado JB, Benner SA (2011) Amplification, mutation, and sequencing of a six-letter synthetic genetic system. J Am Chem Soc 133(38):15105–15112
Yap VB, Pachter L (2004) Identification of evolutionary hotspots in the rodent genomes. Genome Res 14(4):574–579
Acknowledgements
Dimitra Kosta was partially supported by a Royal Society Dorothy Hodgkin Research Fellowship DHF\R1\201246. Jordi Roca-Lacostena was partially funded by Secretaria d'Universitats i Recerca de la Generalitat de Catalunya (AGAUR 2018FI_B_0094). Muhammad Ardiyansyah is partially supported by the Academy of Finland Grant No. 323416.
Funding
Open Access funding provided by Aalto University.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Ardiyansyah, M., Kosta, D. & Roca-Lacostena, J. Embeddability of centrosymmetric matrices capturing the double-helix structure in natural and synthetic DNA. J. Math. Biol. 86, 69 (2023). https://doi.org/10.1007/s00285-023-01895-8