1 Introduction

Phylogenomics is a new field that applies tools from phylogenetics to genome datasets. The multi-species coalescent model is often used to model the distribution of gene trees under a given species tree (Maddison 2008). The first step in statistical analysis of phylogenomic data is to analyze sequence alignments to determine whether their evolutionary histories are congruent with each other. In this step, evolutionary biologists aim to identify genes with unusual evolutionary events, such as duplication, horizontal gene transfer, or hybridization (Ané et al. 2007). To accomplish this, they compare multiple sets of gene trees, that is, phylogenetic trees reconstructed from alignments of genes, with each gene tree characterised by the aforementioned evolutionary events. The classification of gene trees into different categories is therefore important for analyzing multi-locus phylogenetic data.

Tree classification can also help in assessing the convergence of Markov Chain Monte Carlo (MCMC) analyses for Bayesian inference of phylogenetic tree reconstruction. Often, we apply MCMC samplers to estimate the posterior distribution of a phylogenetic tree given an observed alignment. These samplers typically run multiple independent Markov chains on the same observed dataset. The goal is to check whether these chains converge to the same distribution. This check is typically done by comparing summary statistics computed from the sampled trees. These statistics often depend only on the tree topologies, and so they lose information about the branch lengths of the sampled trees. Alternatively, we propose the use of a classification model that classifies trees from different chains and uses statistical measures such as the Area under the ROC Curve (AUC) to indicate how distinguishable the two chains are. Consequently, high AUC values indicate that the chains have not converged to the equilibrium distribution. Currently, there is no classification model over the space of phylogenetic trees, the set of all possible phylogenetic trees with a fixed number of leaves. In this paper, we propose a classifier that is appropriate for this tree space and is sensitive to branch lengths, unlike the summary statistics of most MCMC convergence diagnostic tools.

In Euclidean geometry, the logistic regression model is the simplest generalized linear model for classification. It is a supervised learning method that classifies data points by modeling the log-odds of having a response variable in a particular class as a linear combination of predictors. This model is very popular in statistical learning due to its simplicity, computational speed and interpretability. However, directly applying such classical supervised models to a set of sampled trees may be misleading, since the space of phylogenetic trees does not conform to Euclidean geometry.

The space of phylogenetic trees with labeled leaves [m] is a union of lower-dimensional polyhedral cones of dimension \(m - 1\) in \(\mathbb {R}^e\) where \(e = \left( {\begin{array}{c}m\\ 2\end{array}}\right) \) (Speyer and Sturmfels 2009; Lin et al. 2017). This space is not Euclidean and even lacks convexity (Lin et al. 2017). In fact, Speyer and Sturmfels (2009) showed that the space of phylogenetic trees is a tropicalization of linear subspaces defined by a system of tropical linear equations (Page et al. 2020) and is therefore a tropical linear space.

Consequently, many researchers have applied tools from tropical geometry to statistical learning methods in phylogenomics, such as principal component analysis over the space of phylogenetic trees with a given set of leaves [m] (Page et al. 2020; Yoshida et al. 2019), kernel density estimation (Yoshida et al. 2022a), MCMC sampling (Yoshida et al. 2022b), and support vector machines (Yoshida et al. 2021). Recently, Akian et al. (2021) proposed a tropical linear regression over the tropical projective space as the best-fit tropical hyperplane. However, our logistic regression model is built from first principles and is not a trivial extension of the aforementioned tropical regression model.

In this paper, an analog of the logistic regression is developed over the tropical projective space, which is the quotient space \(\mathbb {R}^e/\mathbb {R}{} \textbf{1}\) where \(\textbf{1}:= (1, 1, \ldots , 1)\). Given a sample of observations within this space, the proposed model finds the “best-fit” tree representative \(\omega _Y \in \mathbb {R}^e/\mathbb {R}{} \textbf{1}\) of each class \(Y \in \{0,1\}\) and the “best-fit” deviation of the gene trees. This tree representative is a statistical parameter and can be interpreted as the corresponding species tree of the gene trees. The deviation parameter is defined in terms of the variability of branch lengths of gene trees. It is established that the median tree, specifically the Fermat–Weber point, can asymptotically approximate the inferred tree representative of each class. The response variable \(Y \in \{0,1\}\) has conditional distribution \(Y|X \sim \textrm{Bernoulli}( S(h(X)))\), where S is the logistic function and h(x) is small when x is close to \(\omega _0\) and far away from \(\omega _{1}\), and vice versa.

In Sect. 2 an overview of tropical geometry and its connections to phylogenetics is presented. The one-species and two-species tropical logistic models are developed in Sect. 3. Theoretical results, including the optimality of the proposed method over tropically distributed predictor trees, the distance distribution of those trees from their representative, the consistency of estimators and the generalization error of each model, are stated in Sect. 3 and proved in “Appendix A”. Section 4 explains the benefit and suitability of using the Fermat–Weber point approximation for the inferred trees, and a sufficient optimality condition is stated. Computational results are presented in Sect. 5, where a toy example is considered for illustration purposes. Additionally, a comparison study between classical, tropical and BHV logistic regression is conducted on data generated under the coalescent model. In both the toy example and the coalescent gene trees example, our model outperforms the alternative regression models. Finally, our model is proposed as an alternative MCMC convergence criterion in Sect. 5.3. The paper concludes with a discussion in Sect. 6. The code developed and implemented for the proposed model can be found in Aliatimis (2024).

The dataset can be found at DRYAD with DOI: 10.5061/dryad.tht76hf65.

2 Tropical Geometry and Phylogenetic Trees

2.1 Tropical Basics

This section covers the basics of tropical geometry and provides the theoretical background for the model developed in later sections. The concept of a tropical metric will be used when defining a suitable distribution for the gene trees. For more details regarding the basic concepts of tropical geometry covered in this section, readers are recommended to consult Maclagan and Sturmfels (2015).

A key tool from tropical geometry is the tropical metric also known as the tropical distance defined as follows:

Definition 1

(Tropical distance) The tropical distance, more formally known as the Generalized Hilbert projective metric, between two vectors \(v, \, w \in ({\mathbb {R}}\cup \{-\infty \})^e\) is defined as

$$\begin{aligned} d_\textrm{tr}(v,w) := \Vert v-w \Vert _\textrm{tr} = \max _{i} \bigl \{ v_i - w_i \bigr \} - \min _{i} \bigl \{ v_i - w_i \bigr \}, \end{aligned}$$
(1)

where \(v = (v_1, \ldots , v_e)\) and \(w= (w_1, \ldots , w_e)\).
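For finite vectors, Definition 1 translates directly into code. A minimal NumPy sketch (the function name `d_tr` and the use of NumPy are our choices, not part of the paper's implementation):

```python
import numpy as np

def d_tr(v, w):
    """Tropical distance (Definition 1) between finite representatives
    v, w in R^e: the max of the coordinate differences minus their min."""
    diff = np.asarray(v, dtype=float) - np.asarray(w, dtype=float)
    return float(np.max(diff) - np.min(diff))
```

Note that adding a constant vector \(c \textbf{1}\) to either argument leaves the value unchanged, which is exactly the invariance discussed in the remark below.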

Remark 1

Consider two vectors \(v=(c,\dots ,c) = c \textbf{1} \in {\mathbb {R}}^e\) and \(w=\textbf{0} \in {\mathbb {R}} ^ e\). It is easy to verify that \( d_\textrm{tr}(v,w) = 0 \), so \(d_\textrm{tr}\) is not a metric on \({\mathbb {R}} ^ e\). The space on which \(d_\textrm{tr}\) is a metric treats all points in \( \{ c \textbf{1}: c \in {\mathbb {R}} \} = \mathbb {R} \textbf{1}\) as the same point. The quotient space \( ( {\mathbb {R}} \cup \{-\infty \} ) ^ e / {\mathbb {R}} \textbf{1} \) achieves just that.

Proposition 1

The function \(d_\textrm{tr}\) is a well-defined metric on \( ({\mathbb {R}}\cup \{-\infty \})^e \!/{\mathbb {R}} \textbf{1}, \) where \(\textbf{1} \in {\mathbb {R}} ^ e\) is the vector of all-ones.

2.2 Equidistant Trees and Ultrametrics

Phylogenetic trees depict the evolutionary relationships between different taxa. For example, they may summarise the evolutionary history of certain species. The leaves of the tree correspond to the species studied, while internal nodes represent (often hypothetical) common ancestors of those species. In this paper, only rooted phylogenetic trees are considered, with the common ancestor of all taxa placed at the root of the tree. The branch lengths of these trees are measured in evolutionary units, i.e. the amount of evolutionary change. Under the molecular clock hypothesis, the rate of genetic change between species is constant over time, which implies genetic equidistance and allows us to treat evolutionary units as proportional to time units. Consequently, phylogenetic trees of extant species are equidistant trees.

Definition 2

(Equidistant tree) Let T be a rooted phylogenetic tree with leaf label set [m], where \(m \in \mathbb {N}\) is the number of leaves. If the distance from all leaves \(i \in [m]\) to the root is the same, then T is an equidistant tree.

It is noted that the molecular clock hypothesis has limitations, and the rate of genetic change can in fact vary from one species to another. However, the assumption that gene trees are equidistant is not unusual in phylogenomics; the multispecies coalescent model makes this assumption in order to conduct inference on the species tree from a sample of gene trees (Maddison and Maddison 2009). The proposed classification method is not restricted to equidistant trees, but all coalescent model gene trees produced in Sect. 5.2 are equidistant.

To conduct any mathematical analysis, a vector representation of trees is needed. A common way is to use BHV coordinates (Billera et al. 2001) but in this paper distance matrices are used instead, which are then transformed into vectors. The main reason is simplicity and computational efficiency; it is much easier to compute gradients in the tropical projective torus than in the BHV space.

Definition 3

(Distance matrix) Consider a phylogenetic tree T with leaf label set [m]. Its distance matrix \(D \in \mathbb {R}^{m \times m}\) has components \(D_{ij}\), the pairwise distance between leaf \(i \in [m]\) and leaf \(j \in [m]\). It follows that the matrix is symmetric with zeros on its diagonal. For equidistant trees, \(D_{ij}\) is equal to twice the difference between the current time and the latest time that the common ancestor of i and j was alive.

To form a vector, the distance matrix D is mapped onto \(\mathbb {R}^e\) by vectorizing the strictly upper triangular part of D, i.e.

$$\begin{aligned} D \mapsto (D_{12}, \dots , D_{1m}, D_{23}, \dots , D_{2m}, \dots , D_{(m-1)m}) \in \mathbb {R}^e, \end{aligned}$$

where the dimension of the resulting vector is equal to the number of all possible pairwise combinations of leaves in T. Hence the dimension of the phylogenetic tree space is \(e = \left( {\begin{array}{c}m\\ 2\end{array}}\right) \). In what follows, the connection between the space of phylogenetic trees and tropical linear spaces is established.
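The vectorization above can be sketched as follows; `vectorize` is a hypothetical helper name and NumPy is assumed:

```python
import numpy as np

def vectorize(D):
    """Vectorize the strictly upper triangular part of a symmetric
    m x m distance matrix D, row by row, giving a vector in R^e
    with e = m(m-1)/2."""
    D = np.asarray(D, dtype=float)
    i, j = np.triu_indices(D.shape[0], k=1)
    return D[i, j]
```

For example, a 3-leaf matrix maps to the vector \((D_{12}, D_{13}, D_{23})\).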

Definition 4

(Ultrametric) Consider the distance matrix \(D \in \mathbb {R}^{m \times m}\). Then if

$$\begin{aligned} \max \{D_{ij}, D_{jk}, D_{ik}\} \end{aligned}$$

is attained at least twice for all \(i,j,k \in [m]\), D is an ultrametric. Note that the distance map \(d(i,j) = D_{ij}\) defines a metric on [m] that satisfies the strong triangle inequality. The space of ultrametrics is denoted by \(\mathcal {U}_m\).
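The three-point condition of this definition can be checked directly; a small sketch with a numerical tolerance (function name ours):

```python
from itertools import combinations

def is_ultrametric(D, tol=1e-9):
    """Check the three-point condition: for every triple of leaves,
    the maximum of the three pairwise distances must be attained
    at least twice (up to a numerical tolerance)."""
    m = len(D)
    for i, j, k in combinations(range(m), 3):
        a, b, c = sorted((D[i][j], D[j][k], D[i][k]))
        if c - b > tol:  # maximum attained only once
            return False
    return True
```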

Theorem 1

(noted in Buneman (1974)) Let T be a rooted phylogenetic tree with leaf label set [m] and let D be its distance matrix. Then, D is an ultrametric if and only if T is an equidistant tree.

Using Theorem 1, considering all possible equidistant trees on [m] is equivalent to considering the space of ultrametrics \(\mathcal {U}_m\) as the space of phylogenetic trees on [m]. Theorem 5 (explained in Ardila and Klivans 2006; Page et al. 2020) in “Appendix B” establishes the connection between phylogenetic trees and tropical geometry by stating that the ultrametric space is a tropical linear space.

3 Method

Our logistic regression model is designed to capture the association between a binary response variable \(Y\in \{0,1\}\) and an explanatory variable vector \(X\in \mathbb {R}^n\), where n is the number of covariates in the model. Under the logistic model, \(Y \sim \text {Bernoulli}(p(x|\omega ))\) where

$$\begin{aligned} p(x|\omega ) = \mathbb {P}(Y=1|x) = \frac{1}{1+\exp {(-h_{\omega }(x)})} = \sigma \left( h_{\omega }(x) \right) , \end{aligned}$$
(2)

where \(\sigma \) is the logistic function and \(\omega \in \mathbb {R}^n\) is the model parameter that needs to be estimated and h is a function that will be specified later. The log-likelihood function of logistic regression for N observation pairs \((x^{(1)},y^{(1)}), \dots , (x^{(N)},y^{(N)})\) is

$$\begin{aligned} l(\omega |x,y) = \frac{1}{N} \sum _{i=1}^N y^{(i)} \log p_{\omega }^{(i)} + (1-y^{(i)}) \log (1-p_{\omega }^{(i)}), \end{aligned}$$
(3)

where \(p_{\omega }^{(i)} = p(x^{(i)}|\omega )\). This is the negative of the cross-entropy loss. The training procedure seeks a statistical estimator \(\hat{\omega }\) that maximizes this function.
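As a concrete illustration, the average log-likelihood (3) can be evaluated as follows, assuming the values \(h(x^{(i)})\) have already been computed (NumPy assumed; function name ours):

```python
import numpy as np

def log_likelihood(h_vals, y):
    """Average Bernoulli log-likelihood (3), i.e. the negative
    cross-entropy between the labels y and p = sigma(h(x))."""
    h = np.asarray(h_vals, dtype=float)
    y = np.asarray(y, dtype=float)
    p = 1.0 / (1.0 + np.exp(-h))  # logistic function sigma
    return float(np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))
```

For instance, when \(h \equiv 0\) every observation contributes \(\log \tfrac{1}{2}\), whatever its label.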

3.1 Optimal Model

The framework described thus far incorporates the tropical, classical and BHV logistic regressions as special cases. In this section, we show that these can be distinguished through the choice of the function h. In fact, this function h can be derived from the conditional distributions X|Y, as stated in Eq. (4) of Lemma 1 below, by a simple application of Bayes’ rule.

If X|Y is a Gaussian distribution with appropriate parameters, the resulting model is the classical logistic regression. Alternatively, if X|Y is a “tropical” distribution, then the resulting classification model is the “tropical” logistic regression. Examples 1 and 2 illustrate this for non-tropical and tropical distributions respectively, and Remark 2 discusses the choice of tropical distribution in more detail.

Furthermore, the function h from (4) also minimizes the expected cross-entropy loss, according to Proposition 2. Therefore, the best model to fit data generated by the tropical Laplace distribution (6) is tropical logistic regression. We conclude this section by showing how the tropical metric and the tropical Laplace distribution may be applied to produce two intuitive variants of tropical logistic regression, our one-species and two-species models.

Lemma 1

Let \(Y \sim \textrm{Bernoulli}(r)\) and define the random vector \(X \in \mathbb {R}^n\) with conditional distribution \(X|Y \sim f_Y\), where \(f_0, f_1\) are probability density functions defined in \(\mathbb {R}^n\). Then, \(Y | X \sim \textrm{Bernoulli}(p(X))\) with \(p(x) = \sigma (h(x))\), where

$$\begin{aligned} h(x) = \log \left( \frac{r f_1(x)}{(1-r)f_0(x)} \right) . \end{aligned}$$
(4)
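For completeness, the single application of Bayes’ rule behind Lemma 1 reads

$$\begin{aligned} \mathbb {P}(Y=1 \mid x) = \frac{r f_1(x)}{r f_1(x) + (1-r) f_0(x)} = \frac{1}{1 + \frac{(1-r) f_0(x)}{r f_1(x)}} = \sigma \left( \log \frac{r f_1(x)}{(1-r) f_0(x)} \right) , \end{aligned}$$

so h(x) in (4) is precisely the log posterior odds of class 1 against class 0.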

Proposition 2

Let \(Y \sim \text {Bernoulli}(r)\) and define the random vector \(X \in \mathbb {R}^n\) with conditional distribution \(X|Y \sim f_Y\), where \(f_0, f_1\) are probability density functions defined in \(\mathbb {R}^n\). The functional p that maximises the expected log-likelihood as given by equation (3) is \(p(x) = \sigma (h(x))\), with h defined as in equation (4) of Lemma 1.

Example 1

(Normal distribution and classical logistic regression) Suppose that the two classes are equiprobable (\(r=1/2\)) and that the covariate is multivariate normal

$$\begin{aligned} X|Y \sim \mathcal {N}(\omega _Y,\sigma ^2 I_n), \end{aligned}$$

where n is the covariate dimension and \(I_n\) is the identity matrix. Using Lemma 1, the optimal model has

$$\begin{aligned} h(x) = - \frac{\Vert x-\omega _1\Vert ^2}{2\sigma ^2} + \frac{ \Vert x-\omega _0\Vert ^2}{2\sigma ^2} = \frac{(\omega _1-\omega _0)^T}{\sigma ^2} (x-\bar{\omega }), \end{aligned}$$
(5)

where \(\Vert \cdot \Vert \) is the Euclidean norm and \(\bar{\omega }=(\omega _0+\omega _1)/2\). This model is the classical logistic regression model with translated covariate \(X-\bar{\omega }\) and \(\omega = \sigma ^{-2} (\omega _1- \omega _0)\).

Example 2

(Tropical Laplace distribution) It may be assumed that the covariates are distributed according to the tropical version of the Laplace distribution, as presented in Yoshida et al. (2022b), with mean \(\omega _Y\) and probability density functions

$$\begin{aligned} f_Y(x) = \frac{1}{\Lambda } \exp \left( - \frac{d_\textrm{tr}(x,\omega _Y)}{\sigma _Y} \right) , \end{aligned}$$
(6)

where \(\Lambda \) is the normalizing constant of the distribution.

Proposition 3

In distribution (6), the normalizing factor is \(\Lambda = e! \sigma _Y^{e-1}\).

Proof

See “Appendix A”. \(\square \)
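Combining (6) with Proposition 3, the density can be evaluated directly for finite representatives; a sketch (function name ours, NumPy assumed):

```python
import math
import numpy as np

def tropical_laplace_pdf(x, omega, sigma, e):
    """Density (6) with the normalizing constant of Proposition 3,
    Lambda = e! * sigma**(e-1); x and omega are representatives
    in R^e of points in the tropical projective torus."""
    diff = np.asarray(x, dtype=float) - np.asarray(omega, dtype=float)
    d = float(np.max(diff) - np.min(diff))  # tropical distance
    return math.exp(-d / sigma) / (math.factorial(e) * sigma ** (e - 1))
```

At the center \(x = \omega _Y\) the density equals \(1/(e!\,\sigma _Y^{e-1})\), the reciprocal of the normalizing constant.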

Remark 2

Consider \(\mu \in \mathbb {R}^d\) and a covariance matrix \(\Sigma \in \mathbb {R}^{d \times d}\). Then the pdf of a classical Gaussian distribution is

$$\begin{aligned} f_{\mu , \Sigma }(x) \propto \exp \left( -\frac{1}{2}(x-\mu )^t \Sigma ^{-1}(x-\mu )\right) \end{aligned}$$
(7)

where \(x \in \mathbb {R}^d\) and \(y^t\) is the transpose of a vector \(y \in \mathbb {R}^d\). When \(\sigma _Y = 1\), the tropical Laplace distribution in (6) is a tropicalization of the left-hand side of (7), where \(\Sigma \) is the tropical identity matrix

$$\begin{aligned} \left( \begin{array}{ccccc} 0 &{}-\infty &{}-\infty &{} \ldots &{}-\infty \\ -\infty &{} 0 &{}-\infty &{} \ldots &{}-\infty \\ \vdots &{} \vdots &{} \vdots &{} \vdots &{}\vdots \\ -\infty &{} -\infty &{}-\infty &{} \ldots &{}0 \\ \end{array} \right) . \end{aligned}$$

Tran (2020) nicely surveys the many different definitions of tropical Gaussian distributions. Since the space of ultrametrics is a tropical linear space (Speyer and Sturmfels 2009), it is natural in this research to use tropical “linear algebra” to define the tropical “Gaussian” distribution in (6). Clearly, not all desirable properties of the classical Gaussian distribution are necessarily realised in a tropical space.

For example, as Tran discussed in Tran (2020), we lose the natural notion of orthogonality of vectors, and with it a geometric intuition for the correlation between two random vectors. Even with the loss of some nice properties of the classical Gaussian distribution, the tropical Laplace distribution (6) is a popular choice. It has been applied to statistical analysis of phylogenetic trees, as a kernel density estimator over the space of phylogenetic trees (Yoshida et al. 2022a) and as a Bayes estimator (Huggins et al. 2011), because this distribution is interpretable in terms of phylogenetic trees.

In particular, the tropical metric \(d_\textrm{tr}\) represents the biggest difference of divergences (speciation times and mutation rates) between two species across two trees, as shown in Example 3. This is a very natural and desirable interpretation in terms of phylogenomics. A tree with an observed ultrametric x whose divergences differ less from those of the centroid has higher probability. Therefore, it is natural to apply the model to a sample generated from the multi-species coalescent model whose species tree has the centroid as its dissimilarity map. It is worth noting that little is known about well-defined distributions over the space of phylogenetic trees, despite many researchers’ attempts (Garba et al. 2021).

Fig. 1

Example for an interpretation of the tropical metric \(d_\textrm{tr}\) in Example 3

Example 3

(Tropical Metric) Suppose we have equidistant trees \(T_1\) and \(T_2\) with leaf labels \(\{A, B, C, D\}\) shown in Fig. 1. Note that leaves A and C in \(T_1\) and \(T_2\) are switched. Thus, the pairwise distances between A and D in \(T_1\) and \(T_2\), as well as the pairwise distances between C and D in \(T_1\) and \(T_2\), exhibit the largest and second largest differences among all possible pairwise distances.

Let u be the dissimilarity map from \(T_1\) and v the dissimilarity map from \(T_2\):

$$\begin{aligned} \begin{array}{ccc} u &{}=&{}(2, 2, 2, 1.4, 1.4, 1) \\ v&{}=&{}(1.6,2,0.6,2,1.6,2). \end{array} \end{aligned}$$

Then we have

$$\begin{aligned} u - v = (2-1.6, 2-2, 2-0.6, 1.4-2, 1.4-1.6, 1-2) = (0.4, 0,1.4,-0.6,-0.2,-1). \end{aligned}$$

Therefore

$$\begin{aligned} d_\textrm{tr}(u, v) = (u - v)_{A, D} - (u - v)_{C, D} \end{aligned}$$

which means the tropical metric measures the difference between the divergence of A and D and the divergence of C and D.
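The computation in Example 3 can be verified numerically; here we assume the coordinate order (AB, AC, AD, BC, BD, CD), following the vectorization of Sect. 2.2:

```python
# Dissimilarity maps of T1 and T2 in coordinate order (AB, AC, AD, BC, BD, CD)
u = [2, 2, 2, 1.4, 1.4, 1]
v = [1.6, 2, 0.6, 2, 1.6, 2]

diff = [ui - vi for ui, vi in zip(u, v)]
d_tr = max(diff) - min(diff)  # max at the (A, D) entry, min at (C, D)
```

The maximum of \(u - v\) is attained at the (A, D) coordinate and the minimum at (C, D), so \(d_\textrm{tr}(u,v) = 1.4 - (-1) = 2.4\).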

Combining the result of Proposition 3 with Eqs. (4) and (6) yields

$$\begin{aligned} h_{\omega _0,\omega _1}(x) = \frac{d_\textrm{tr}(x,\omega _0)}{\sigma _0} - \frac{d_\textrm{tr}(x,\omega _1)}{\sigma _1} + (e-1) \log \left( \frac{\sigma _0}{\sigma _1}\right) . \end{aligned}$$
(8)

In its most general form, the model parameters are \((\omega _0,\omega _1,\sigma _0,\sigma _1)\), so the parameter space is a subset of \((\mathbb {R}^e/\mathbb {R}{} \textbf{1})^2 \times \mathbb {R}^2_+\) with dimension 2e. Two instances of this general model are particularly useful and interpretable in practice. We call these the one-species and two-species models; they are our focus for tropical logistic regression in the rest of the paper.

For the one-species model, it is assumed that \(\omega _0 =\omega _1\) and \(\sigma _0 \ne \sigma _1\). If, without loss of generality, \(\sigma _1>\sigma _0\), Eq. (8) becomes

$$\begin{aligned} h_{\omega }(x) = \lambda \left( d_\textrm{tr}(x,\omega ) - c \right) , \end{aligned}$$
(9)

where \(\lambda = (\sigma _0^{-1} - \sigma _1^{-1})\) and \(\lambda c = \log {\left( \sigma _1/\sigma _0\right) }\). Symbolically, the expression in Eq. (9) can be considered to be a scaled tropical inner product, whose direct analogue in classical logistic regression is the classical inner product \(h_{\omega }(x) = \omega ^T x\). See Section C in the “Appendix” for more details. The classifier is \(C(x) = \mathbb {I}(d_\textrm{tr}(x,\hat{\omega } ) > c) \), where \(\hat{\omega }\) is the inferred estimator of \(\omega ^*\). Note that the classification threshold and the probability contours (p(x)) are tropical circles, illustrated in Fig. 2.
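A sketch of the one-species classifier, recovering the threshold c from \(\sigma _0\) and \(\sigma _1\) as above (function and variable names are ours):

```python
import math
import numpy as np

def one_species_classifier(x, omega_hat, sigma0, sigma1):
    """One-species classifier C(x) = I(d_tr(x, omega_hat) > c) from
    Eq. (9), with sigma0 < sigma1, lambda = 1/sigma0 - 1/sigma1 and
    lambda * c = log(sigma1 / sigma0)."""
    diff = np.asarray(x, dtype=float) - np.asarray(omega_hat, dtype=float)
    d = float(np.max(diff) - np.min(diff))  # tropical distance to omega_hat
    lam = 1.0 / sigma0 - 1.0 / sigma1
    c = math.log(sigma1 / sigma0) / lam
    return int(d > c)
```

Points tropically far from \(\hat{\omega }\) are assigned to the more dispersed class 1, matching the tropical-circle decision boundary.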

For the two-species-tree model, it is assumed that \(\sigma _0 = \sigma _1\), and \(\omega _0\ne \omega _1\). Equation (8) reduces to

$$\begin{aligned} h_{\omega _0,\omega _1}(x) = \sigma ^{-1}( d_\textrm{tr}(x,\omega _0) - d_\textrm{tr}(x,\omega _1) ), \end{aligned}$$
(10)

with a classifier \( C(x) = \mathbb {I}( d_\textrm{tr}(x,\hat{\omega }_0) > d_\textrm{tr}(x,\hat{\omega }_1) ), \) where \(\hat{\omega }_y\) is the inferred tree for class \(y \in \{0,1\}\). The classification boundary is the tropical bisector between the estimators \(\hat{\omega }_0\) and \(\hat{\omega }_1\), which is extensively studied in Criado et al. (2021), and the probability contours are tropical hyperbolae with \(\hat{\omega }_0\) and \(\hat{\omega }_1\) as foci, as shown in Fig. 4(right).
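A sketch of the two-species classifier of Eq. (10); note that \(\sigma \) cancels at the decision boundary, so only the fitted representatives are needed (names ours):

```python
import numpy as np

def d_tr(v, w):
    """Tropical distance between finite representatives."""
    diff = np.asarray(v, dtype=float) - np.asarray(w, dtype=float)
    return float(np.max(diff) - np.min(diff))

def two_species_classifier(x, omega0_hat, omega1_hat):
    """Classifier of Eq. (10): assign class 1 exactly when x is
    tropically closer to omega1_hat than to omega0_hat."""
    return int(d_tr(x, omega0_hat) > d_tr(x, omega1_hat))
```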

The one-species model is appropriate when the gene trees of both classes are concentrated around the same species tree \(\omega \) with potentially different concentration rates. When the gene trees of each class come from distributions centered at different species trees the two-species model is preferred.

3.2 Model Selection

In the previous subsection, we established the correspondence between the covariate conditional distribution and the function h which defines the logistic regression model. According to Proposition 2, the best regression model follows from the distribution that fits the data. The family of distributions that best fits the training data of a given class can indicate which regression model to use. The question that naturally arises is how to assess which family of conditional distributions has the best fit.

One issue is that the random covariates are multivariate, so the Kolmogorov–Smirnov test cannot be readily applied. Moreover, the four families considered, namely the classical and tropical Laplace and Gaussian distributions, are not nested. Nonetheless, for all these families the distances of the covariates from their centres are Gamma distributed. This is stated in Corollary 1, which is based on Proposition 4. Note that the distance metric corresponds to the geometry of the covariates. However, the arguments used in the proof of Corollary 1 do not apply to distributions defined on the space of ultrametric trees \(\mathcal {U}_m\), because this space is not translation invariant. For a similar reason, the corollary does not apply to the BHV metric.

Proposition 4

Consider a function \(d:\mathbb {R}^n \rightarrow \mathbb {R}\) with \(\alpha d(x) = d(\alpha x)\) for all \(\alpha \ge 0\), and suppose \(X \sim f\) with \(f(x) \propto \exp (-d^i(x)/(i\sigma ^i))\) a valid probability density function for some \(i \in \mathbb {N},\) \( \sigma >0\). Then, \(d^i(X) \sim i \sigma ^i \textrm{Gamma}(n/i)\).

Corollary 1

If \(X \in \mathbb {R}^e\) with \(X\sim f \propto \exp {(-d^i(x,\omega ^*)/(i\sigma ^{i}))}\), where d is the Euclidean metric, then \(d^i(X,\omega ^*) \sim i \sigma ^i \textrm{Gamma}(e/i)\). If \(X \in \mathbb {R}^e/\mathbb {R}{} \textbf{1}\) with \(X\sim f \propto \exp {(-d_\textrm{tr}^i(x,\omega ^*)/(i\sigma ^{i}))}\), where \(d_\textrm{tr}\) is the tropical metric, then \(d_\textrm{tr}^i(X,\omega ^*) \sim i \sigma ^i \textrm{Gamma}((e-1)/i)\).

The suitability of the tropical against the classical logistic regression is assessed for the coalescent model and the MrBayes trees by visually comparing the fits of the theoretical Gamma distributions to the Euclidean and tropical distances of the gene trees from the species tree.
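A lightweight complement to such a visual comparison is a moment check based on Corollary 1 with \(i = 1\); this diagnostic is our illustration, not a procedure from the paper:

```python
import numpy as np

def implied_sigmas(dists, e):
    """Moment check of Corollary 1 with i = 1: if
    d_tr(X, omega*) ~ sigma * Gamma(e - 1), the mean is
    (e - 1) * sigma and the variance is (e - 1) * sigma**2.
    Close agreement of the two implied sigma values supports
    the tropical Laplace fit."""
    dists = np.asarray(dists, dtype=float)
    return dists.mean() / (e - 1), np.sqrt(dists.var() / (e - 1))
```

If the two implied values of \(\sigma \) disagree markedly, the Gamma shape implied by the tropical model is inconsistent with the observed distances.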

3.3 Consistency and Generalization Error

In this subsection, the consistency of the statistical estimators (in Theorem 2) and of the tropical logistic regression as a learning algorithm (in Propositions 5 and 6) are established. Finally, the generalization error (probability of misclassification for unseen data) for the one-species model is derived and an upper bound is found for the generalization error of the two-species model. In both cases the error bounds tighten as the estimation error \(\epsilon \) shrinks to zero. It is worth mentioning that in the case of exact estimation, the generalization error of the one-species model can be computed explicitly via Eq. (11). Moreover, there is a higher misclassification rate for the more dispersed class (inequality (12)).

Theorem 2

(Consistency) The estimator \((\hat{\omega },\hat{\sigma }) = (\hat{\omega }_0, \hat{\omega }_{1}, \hat{\sigma }_0,\hat{\sigma }_1) \in \Omega ^2 \times \Sigma ^2\) of the parameter \( (\omega ^*,\sigma ^*) = (\omega _0^*,\omega _1^*,\sigma _0^*,\sigma _1^*) \in \Omega ^2 \times \Sigma ^2 \) is defined as the maximizer of the logistic likelihood function, where \(\Omega \subset \mathbb {R}^e/\mathbb {R}{} \textbf{1}\) and \(\Sigma \subset \mathbb {R}_+\) are compact sets. Moreover, it is assumed that the covariate-response pairs \((X_1,Y_1), (X_2,Y_2), \dots , (X_n,Y_n)\) are independent and identically distributed with \(X_i \in \mathbb {R}^e/\mathbb {R}{} \textbf{1}\), \(d_\textrm{tr}(X,\omega _Y)\) being integrable and square-integrable and \(Y_i \sim \text {Bernoulli}( S(h(X_i,(\omega ^*,\sigma ^*) )))\). Then,

$$\begin{aligned} (\hat{\omega },\hat{\sigma }) \overset{p}{\rightarrow }\ (\omega ^*,\sigma ^*) \text { as } n \rightarrow \infty . \end{aligned}$$

In other words, the model parameter estimator is consistent.

Proposition 5

(One-species generalization error) Consider the one-species model where \(\omega =\omega _0=\omega _1 \in \mathbb {R}^e/\mathbb {R}{} \textbf{1}\) and, without loss of generality, \(\sigma _0 < \sigma _1\). The classifier is \(C(x)=\mathbb {I}(h_{\hat{\omega }}(x) \ge 0)\), where h is defined in equation (9) and \(\hat{\omega }\) is the estimate of \(\omega ^{*}\). Define the covariate-response joint random variable (X, Y) with \(Z = \sigma _Y^{-1} d_\textrm{tr}(X,\omega _Y^*)\) drawn from the same distribution with cumulative distribution function F. Then,

$$\begin{aligned} \mathbb {P}(C(X) = 1|Y=0)&\in \left[ 1-F(\sigma _1 \left( \alpha + \epsilon \right) ), 1-F(\sigma _1 \left( \alpha - \epsilon \right) ) \right] , \\ \mathbb {P}(C(X) = 0|Y=1)&\in \left[ F(\sigma _0 \left( \alpha - \epsilon \right) ), F(\sigma _0 \left( \alpha + \epsilon \right) ) \right] , \text { where } \\ \alpha = \frac{\log {\frac{\sigma _1}{\sigma _0}}}{\sigma _1 - \sigma _0},&\,\, \text { and } \,\, \epsilon = (e-1) \frac{d_\textrm{tr}( \hat{\omega }, \omega ^* ) }{\sigma _1\sigma _0}. \end{aligned}$$

The generalization error defined as \(\mathbb {P}(C(X) \ne Y)\) lies in the average of the two intervals above. In particular, note that if \(\hat{\omega } = \omega ^*\), then \(\epsilon =0\) and the intervals shrink to a single point, so the misclassification probabilities and generalization error can be computed explicitly.

$$\begin{aligned} \mathbb {P}(C(X) \ne Y) = \frac{1}{2} \left( 1-F(\sigma _1 \alpha ) + F(\sigma _0 \alpha ) \right) \end{aligned}$$
(11)

Moreover, if \(\hat{\omega } = \omega ^*\) and \(Z \sim \textrm{Gamma}(e-1,1)\), then

$$\begin{aligned} \mathbb {P}(C(X)=1 | Y = 0) < \mathbb {P}(C(X)=0 | Y = 1). \end{aligned}$$
(12)
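Since \(e - 1\) is an integer, the Gamma CDF in Eq. (11) has a closed Erlang form, so the exact-estimation error can be computed directly; a sketch under the Gamma assumption above (function names ours):

```python
import math

def gamma_cdf(n, x):
    """CDF of Gamma(n, 1) for integer shape n (Erlang distribution):
    F(x) = 1 - exp(-x) * sum_{k=0}^{n-1} x**k / k!."""
    return 1.0 - math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(n))

def one_species_error(sigma0, sigma1, e):
    """Generalization error of Eq. (11) under exact estimation,
    taking Z ~ Gamma(e - 1, 1)."""
    alpha = math.log(sigma1 / sigma0) / (sigma1 - sigma0)
    return 0.5 * (1.0 - gamma_cdf(e - 1, sigma1 * alpha)
                  + gamma_cdf(e - 1, sigma0 * alpha))
```

The error decreases as the ratio \(\sigma _1/\sigma _0\) grows, i.e. as the two classes become easier to tell apart.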

Proposition 6

(Two-species generalization error) Consider the random vector \(X \in \mathbb {R}^e/\mathbb {R}{} \textbf{1}\) with response \(Y \in \{0,1\}\) and the random variable \(Z = d_\textrm{tr}( X, \omega ^*_Y )\). Assuming that the probability density function is \(f_X(x) \propto f_Z(d_\textrm{tr}( x, \omega ^*_Y ))\), the generalization error satisfies the following upper bound

$$\begin{aligned} \mathbb {P}\left( C(X) \ne Y \right) \le \frac{1}{2} F^C_{Z}(\Delta _{\epsilon }) + h(\epsilon ) , \end{aligned}$$
(13)

where \(\epsilon = d_\textrm{tr}(\hat{\omega }_1, \omega ^*_1 )+d_\textrm{tr}(\hat{\omega }_{0}, \omega ^*_{0})\), \(2\Delta _\epsilon = \left( d_\textrm{tr}(\omega _1^*, \omega _{0}^*) - \epsilon \right) \), \(F^C_Z\) is the complementary cumulative distribution of Z, and \(h(\epsilon )\) is an increasing function of \(\epsilon \) with \(2\,h(\epsilon ) \le F^C_Z(\Delta _{\epsilon })\) and \(h(0)=0\), assuming that \(\mathbb {P}\left( d_\textrm{tr}(X,\omega _1^*) = d_\textrm{tr}(X,\omega _{0}^*) \right) =0\).

Moreover, under the conditions of Theorem 2, our proposed learning algorithm is consistent.

Observe that the upper bound is a strictly increasing function of \(\epsilon \).

Example 4

The complementary cumulative distribution of \(\textrm{Gamma}(n,\sigma )\) is \(F^C(x) = \Gamma (n,x/\sigma )/\Gamma (n,0)\), where \(\Gamma \) is the upper incomplete gamma function and \(\Gamma (n,0)=\Gamma (n)\) is the regular Gamma function. Therefore, the tropical distribution given in Eq. (6) yields the following upper bound for the generalization error

$$\begin{aligned} \frac{\Gamma \left( e-1, \frac{d_\textrm{tr}(\omega _0^*,\omega _1^*)}{2\sigma } \right) }{2\Gamma (e-1)}, \end{aligned}$$
(14)

under the assumptions of Proposition 6 and assuming that the estimators coincide with the theoretical parameters. This assumption is reasonable for large sample sizes and it follows from Theorem 2.
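The bound (14) likewise has a closed form for integer \(e - 1\); a sketch (function name ours):

```python
import math

def error_bound(d_omega, sigma, e):
    """Upper bound (14): Gamma(e-1, d/(2*sigma)) / (2 * Gamma(e-1)),
    using the closed form of the regularized upper incomplete gamma
    for integer shape, Q(n, x) = exp(-x) * sum_{k<n} x**k / k!."""
    x = d_omega / (2.0 * sigma)
    q = math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(e - 1))
    return 0.5 * q
```

The bound equals 1/2 when the two species trees coincide and decays rapidly as \(d_\textrm{tr}(\omega _0^*,\omega _1^*)\) grows relative to \(\sigma \).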

In subsequent sections, these theoretical results will guide us in implementing our model. Bounds on the generalization error from Propositions 5 and 6 are computed, and the suitability of Euclidean and tropical distributions, and as a result of classical and tropical logistic regression, is assessed using the distance distribution of Proposition 4.

4 Optimization

As in classical logistic regression, the parameter vectors \(({\hat{\omega }}, {\hat{\sigma }})\) maximising the log-likelihood (3) are chosen as statistical estimators. Identifying them requires a continuous optimization routine. While root-finding algorithms typically work well for identifying maximum likelihood estimators in classical logistic regression, where the log-likelihood is concave, they are unsuitable here. The gradients of the log-likelihood under the proposed tropical logistic models are only piecewise continuous, with the number of discontinuities growing with the sample size. Furthermore, even if a stationary point is found, it may merely be a local optimum. In light of this, the tropical Fermat–Weber problem of Lin and Yoshida (2018) is revisited.

4.1 Fermat–Weber Point

A Fermat–Weber point, or geometric median, \(\tilde{\omega }_n\) of the sample \((X_1,\dots ,X_n)\) is a point that minimizes the sum of distances to the sample points, i.e.

$$\begin{aligned} \tilde{\omega }_n \in \mathop {{{\,\mathrm{arg\,min}\,}}}\limits _{\omega } \sum _{i=1}^n d_{\textrm{tr}}(X_i,\omega ). \end{aligned}$$
(15)

This point is rarely unique for finite n; indeed, there is often an infinite set of Fermat–Weber points (Lin and Yoshida 2018). However, the proposition below gives conditions for asymptotic convergence.

Proposition 7

Let \(X_i \overset{\textrm{iid}}{\sim }\ f\), where f is a distribution that is symmetric around its centre \(\omega ^* \in \mathbb {R}^{e}/\mathbb {R}{} \textbf{1}\), i.e. \(f(\omega ^* + \delta ) = f(\omega ^* - \delta )\) for all \(\delta \in \mathbb {R}^{e}/\mathbb {R}{} \textbf{1}\). Let \(\tilde{\omega }_n\) be any Fermat–Weber point as defined in Eq. (15). Then, \(\tilde{\omega }_n \overset{p}{\rightarrow }\ \omega ^*\) as \(n \rightarrow \infty \).

The significance of Proposition 7 is twofold. It proves that the Fermat–Weber sets of points sampled from symmetric distributions shrink to a unique point. This is a novel result and ensures that for sufficiently large sample sizes the topology of any Fermat–Weber point is fixed. Additionally, combining Theorem 2 and Proposition 7, \(\hat{\omega }_n - \tilde{\omega }_n \overset{p}{\rightarrow }\ 0\) as \(n \rightarrow \infty \). Furthermore, empirical evidence in Fig. 5 (see the following section) suggests that \(d_\textrm{tr}(\hat{\omega }_n, \omega ^*) = \mathcal {O}_p(1/\sqrt{n})\) and \(d_\textrm{tr}(\tilde{\omega }_n, \omega ^*) = \mathcal {O}_p(1/\sqrt{n})\). These statements are left as conjectures, whose proofs are beyond the scope of this paper. Assuming they hold and applying the triangle inequality, it follows that \( d_\textrm{tr}(\hat{\omega }_n, \tilde{\omega }_n)= \mathcal {O}_p(1/\sqrt{n})\). As a result, for a sufficiently large sample size the Fermat–Weber point may be used as an approximation of the MLE vector. Indeed, there are benefits in doing so.

Instead of solving a single optimization problem with \(2e-1\) variables, three simpler problems are considered: finding the Fermat–Weber point of each of the two classes, each with \(e-1\) degrees of freedom, and then finding the optimal \(\sigma \), which is a one-dimensional root-finding problem. The algorithms of our implementation for both models can be found in “Appendix D”.

There is also another benefit of using Fermat–Weber points. Proposition 8 provides a sufficient optimality condition that the MLE lacks, since a vanishing gradient of the log-likelihood function merely indicates a local optimum.

Proposition 8

Let \(X_1,\dots ,X_n \in \mathbb {R}^e/\mathbb {R}{} \textbf{1}\), \(\omega \in \mathbb {R}^e/\mathbb {R}{} \textbf{1}\) and define the function

$$\begin{aligned} f(\omega ) = \sum _{i=1}^{n} d_\textrm{tr}(X_i,\omega ). \end{aligned}$$
  i.

    The gradient vector of f is defined at \(\omega \) if and only if the vectors \(\omega - X_i\) have unique maximum and minimum components for all \(i \in [n]\).

  ii.

    If the gradient of f at \(\omega \) is well-defined and zero, then \(\omega \) is a Fermat–Weber point.

In Lin and Yoshida (2018), Fermat–Weber points are computed by means of linear programming, which is computationally expensive. Employing a gradient-based method is much faster, but there is no guarantee of convergence. Nevertheless, if the gradient, which is an integer vector, vanishes, then it is guaranteed, as above, that the algorithm has reached a Fermat–Weber point. This tends to happen rather frequently, but not in all cases examined in Sect. 4.
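
As a concrete illustration, the gradient-based approach just described can be sketched in a few lines of Python (a sketch under our own naming; numpy is assumed, and the step size and iteration cap are arbitrary choices rather than values from our implementation):

```python
import numpy as np

def trop_dist(x, y):
    # d_tr(x, y) = max_i (x_i - y_i) - min_i (x_i - y_i) on R^e / R·1
    v = np.asarray(x, float) - np.asarray(y, float)
    return v.max() - v.min()

def fw_gradient(X, omega):
    # Gradient of sum_i d_tr(X_i, omega): each term max(v) - min(v) with
    # v = omega - X_i contributes +1 at an argmax coordinate and -1 at an
    # argmin coordinate, so the total gradient is an integer vector.
    g = np.zeros_like(omega)
    for x in X:
        v = omega - x
        g[np.argmax(v)] += 1.0
        g[np.argmin(v)] -= 1.0
    return g

def fermat_weber(X, lr=0.05, iters=2000):
    # Subgradient descent from the coordinate-wise mean, keeping the best
    # iterate; a vanishing (integer) gradient certifies a Fermat-Weber point.
    omega = X.mean(axis=0)
    best, best_obj = omega.copy(), sum(trop_dist(x, omega) for x in X)
    for _ in range(iters):
        g = fw_gradient(X, omega)
        if not g.any():          # gradient vanished: Fermat-Weber point
            return omega
        omega = omega - lr * g
        obj = sum(trop_dist(x, omega) for x in X)
        if obj < best_obj:
            best, best_obj = omega.copy(), obj
    return best
```

At ties, `np.argmax` picks the first index, a valid subgradient choice even where the gradient of Proposition 8 is undefined.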

Remark 3

Our choice of Fermat–Weber points to represent centres is not the only practical option; however, it is especially desirable due to the interpretability of the resulting solutions.

Recently, Comǎneci and Joswig studied tropical Fermat–Weber points obtained using the asymmetric tropical distance (Comǎneci and Joswig 2023). They found that if all \(X_i\) are ultrametric, then the resulting tropical Fermat–Weber points are also ultrametric, all with the same tree topology. On the other hand, Lin et al. (2017) show that a tropical Fermat–Weber point defined with \(d_\textrm{tr}\) of a sample taken from the space of ultrametrics could fall outside of the ultrametric space.

Despite this, the major drawback of using the asymmetric tropical distance is that it would result in losing the phylogenetic interpretation of the distance or dissimilarity between two trees held by the tropical metric \(d_\textrm{tr}\); see Remark 2.

5 Results

In this section, tropical logistic regression is applied in three different scenarios. The first and simplest considers data points generated from the tropical Laplace distribution. Secondly, gene trees sampled from a coalescent model are classified according to the species tree that generated them. Finally, the method is applied as an MCMC convergence criterion for phylogenetic tree reconstruction, using output from the Mr Bayes software. The models’ performance on these datasets is examined in terms of misclassification rates and AUCs.

5.1 Toy Example

In this example, a set of data points is generated from the tropical Laplace distribution defined in Eq. (6) using rejection sampling.
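
A rejection sampler for such a density can be sketched as follows (assuming the density is proportional to \(\exp (-d_\textrm{tr}(x,\omega )/\sigma )\); the uniform proposal box and the function name are our own choices, and the box truncates only negligibly far tails):

```python
import numpy as np

def sample_trop_laplace(omega, sigma, n, box=10.0, seed=0):
    # Rejection sampling in R^{e-1} coordinates (last coordinate of the
    # tropical projective torus set to 0). Proposals are uniform on
    # [-box, box]^{e-1}; a proposal x is accepted with probability
    # exp(-d_tr(x, omega) / sigma), the target density up to a constant.
    rng = np.random.default_rng(seed)
    omega_full = np.append(omega, 0.0)      # lift the centre to R^e
    out = []
    while len(out) < n:
        x = rng.uniform(-box, box, size=len(omega))
        v = np.append(x, 0.0) - omega_full
        d = v.max() - v.min()               # tropical distance d_tr(x, omega)
        if rng.uniform() < np.exp(-d / sigma):
            out.append(x)
    return np.array(out)
```

The acceptance rate falls quickly with the dimension, so this brute-force scheme is only practical for small e such as the three-leaf trees plotted here.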

Fig. 2
figure 2

Scatterplot of 200 points—100 dots for class 0 and 100 Xs for class 1, black for misclassified and grey otherwise—imposed upon a contour plot of the probability of inclusion in class 0, where the black contour is the classification threshold. The deviation parameters used in data generation were \(\sigma _0=1,\sigma _1=5\) and the centre of the distribution (white-filled point) is the origin. The centres of the two distributions are \(\omega _0 = \omega _1\)

The data points are defined in the tropical projective torus \(\mathbb {R}^e/\mathbb {R}{} \textbf{1}\), which is isomorphic to \(\mathbb {R}^{e-1}\). To map \(x \in \mathbb {R}^e/\mathbb {R}{} \textbf{1}\) to \(\mathbb {R}^{e-1}\), simply set the last component of x to 0, or in other words \(x \mapsto (x_1-x_e,x_2-x_e, \dots , x_{e-1} - x_e)\). For illustration purposes, it is desirable to plot points in \(\mathbb {R}^2\), so we use \(e=3\) which corresponds to phylogenetic trees with 3 leaves. Both the one-species model and the two-species model are examined.

Fig. 3
figure 3

(left) Generalization error for 9 different deviation ratios. The estimator \(\hat{\omega } = (0.3,0.3)\) differs from the true parameter \(\omega = (0,0)\). The upper and lower bounds of Proposition 5 are plotted as dashed lines and the generalization error for the correct estimator \(\hat{\omega }= \omega ^*\) as a solid line. The dots represent the proportion of misclassified points out of 2000 points in each experiment, 1000 per class. (right) Generalization errors for 7 different dispersion parameters, with black markers for the two-species tropical logistic regression and white markers for the classical logistic regression. The upper bound (14) of Proposition 6 is plotted as a dashed line

In the case of the former, \(\omega = \omega _0 = \omega _1\) and \(\sigma _0 \ne \sigma _1\). The classification boundary in this case is a tropical circle. If \(\sigma _0 < \sigma _1\), the algorithm classifies points close to the inferred centre to class 0 and those more dispersed away from the centre to class 1. For simplicity, the centre is set to be the origin \(\omega =(0,0,0)\) and no inference is performed. In Fig. 2 a scatterplot of the two classes is shown, where misclassified points are highlighted. As anticipated from Proposition 5, there are more misclassified points from the more dispersed class (class 1). Out of 100 points per class, there are 7 and 21 misclassified points from classes 0 and 1 respectively, while the theoretical probabilities calculated from Eq. (11) of Proposition 5 are \(9\%\) and \(19\%\) respectively.

Fig. 4
figure 4

Scatterplot of points—dots for class 0 and X for class 1, black for misclassified according to (left) classical logistic regression or (right) tropical logistic regression, and grey otherwise—alongside a contour plot of the probabilities, where the black contour is the classification threshold. The centres, drawn as big white dots, are \(\omega _0 = (0,0,0), \omega _1=(3,2,0)\) and \(\sigma = 0.5\)

Varying the deviation ratio \(\sigma _1/\sigma _0\) in the data-generation process allows exploration of its effect on the generalization error in the one-species model. The closer this ratio is to unity, the higher the generalization error. For \(\sigma _0 = \sigma _1\) the classes are indistinguishable, so any model is as good as a random guess, i.e. the generalization error is 1/2. The generalization error for each value of the ratio is estimated by the proportion of misclassified points across both classes. Assuming an inferred \(\omega \) that differs from the true parameter, Fig. 3(left) verifies the bounds of Proposition 5.

For the two-species model, tropical logistic regression is directly compared to classical logistic regression. Data is generated using different centres \(\omega _0 = (0,0,0),\) \(\omega _1 = (3,2,0)\) but the same \(\sigma =0.5\). The classifier is \(C(x)=\mathbb {I}(h(x)>0)\) for both methods, using h as defined in Eqs. (5) and (10) for the classical and tropical logistic regression respectively. Figure 4 compares contours and classification thresholds of the classical (left) and tropical (right) logistic regression by overlaying them on top of the same data. Out of \(100+100\) points there are \(5+4\) and \(4+3\) misclassifications in classical and tropical logistic regression respectively. Figure 3(right) visualizes the misclassification rates of the two logistic regression methods for different values of dispersion \(\sigma \), showing the tropical logistic regression to have consistently lower generalization error than the classical, even in this simple toy problem.
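
In this equal-deviation setting, the tropical decision rule amounts to assigning each point to its nearest tropical centre. A sketch (the exact statistic h of Eq. (10) also involves the scale parameter, which cancels when the deviations are equal; the function names are ours):

```python
import numpy as np

def trop_dist(x, y):
    # d_tr(x, y) = max_i (x_i - y_i) - min_i (x_i - y_i)
    v = np.asarray(x, float) - np.asarray(y, float)
    return v.max() - v.min()

def classify(x, w0, w1):
    # Class 1 iff x is strictly closer (tropically) to w1 than to w0.
    return int(trop_dist(x, w1) < trop_dist(x, w0))
```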

Finally, we investigate the convergence rate of the Fermat–Weber points and of the MLEs from the two-species model as the sample size N increases. Fixing \(\omega _0^* = (0,0,0)\) and \(\omega _1^* = (3,2,0)\) as before, the Fermat–Weber point numerical solver and the log-likelihood optimization solver are employed to find \((\tilde{\omega }_0)_N\) and \(( (\hat{\omega }_0)_N, (\hat{\omega }_1)_N, \hat{\lambda }_N )\) respectively. From this, the error \(d_N = d_\textrm{tr}((\omega _0)_N,\omega _0^*) \) is computed for the two methods, with \((\omega _0)_N= (\tilde{\omega }_0)_N\) and \((\hat{\omega }_0)_N\) respectively. For each N, this procedure is repeated 100 times to estimate the mean error \(r_N = \mathbb {E}\left( d_N \right) \). Figure 5 shows that for both methods \(r_N \sqrt{N} \rightarrow C\) as \(N \rightarrow \infty \), with \(C_\textrm{FW} < C_\textrm{MLE}\). Since \(\mathbb {E}(\sqrt{N} d_N) \rightarrow C\), it follows that \(\sqrt{N} d_N = \mathcal {O}_p(1)\) as \(N \rightarrow \infty \). This supports the assumption of Sect. 3 that Fermat–Weber points can be used in lieu of MLEs, since they converge to each other in probability at rate \(1/\sqrt{N}\). Interestingly, the MLEs produce higher errors than the FW points. This may be due to an imperfection of the MLE solver, which may have become stuck at a local optimum.

Fig. 5
figure 5

Expected asymptotic error for FW points \((\tilde{\omega }_0)_N\) (in black) and MLE points \((\hat{\omega }_0)_N\) (in grey) for different values of N. Error is defined as the tropical distance from the true centre \(\omega _0^*\) i.e. \(d_\textrm{tr}(\omega _N,\omega _0^*)\). The dashed lines are \(y \propto N^{-0.5}\), so this figure illustrates that \(d_\textrm{tr}((\omega _0)_N,\omega _0^*) = \mathcal {O}_p(1/\sqrt{N})\) as \(N\rightarrow \infty \)

5.2 Coalescent Model

The data used in our simulations were generated under the multispecies coalescent model, using the python library dendropy (Sukumaran and Holder 2010). The classification method we propose is the two-species model, because two distinct species trees have been used to generate the gene tree data for each class.

Two distinct species trees are used, randomly generated under a Yule model. Then, using dendropy, 1000 gene trees are generated under the coalescent model for each of the two species trees, for specific model parameters, and labelled according to the species tree that generated them. The trees have 10 leaves, so the number of model variables is \(\left( {\begin{array}{c}10\\ 2\end{array}}\right) = 45\).
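
Concretely, each gene tree enters the model as the vector of its pairwise leaf-to-leaf path distances, a point in \(\mathbb {R}^{45}\) for 10 leaves. A minimal sketch of this vectorization in plain Python (the parent/edge-length encoding is hypothetical; in practice a library such as dendropy provides equivalent distance-matrix utilities):

```python
from itertools import combinations

def cophenetic_vector(parent, elen, leaves):
    # Map a rooted tree to its vector of pairwise leaf distances.
    # `parent`: child -> parent, `elen`: child -> length of edge to parent,
    # `leaves`: leaf labels in a fixed order (vector entries follow
    # combinations(leaves, 2)).
    def depths_to_ancestors(v):
        # distance from leaf v to each of its ancestors (including itself)
        path, d = {}, 0.0
        while v in parent:
            path[v] = d
            d += elen[v]
            v = parent[v]
        path[v] = d                      # root
        return path
    vec = []
    for a, b in combinations(leaves, 2):
        pa, pb = depths_to_ancestors(a), depths_to_ancestors(b)
        # the minimum over common ancestors is attained at the LCA
        vec.append(min(pa[u] + pb[u] for u in pa if u in pb))
    return vec
```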

Since the species trees are known, we conduct a comparative analysis between classical, tropical and BHV-based (Billera et al. 2001) logistic regression. In the “Appendix”, we show an approximate analog of our model for the BHV metric. The comparative analysis includes fitting the distance distributions and comparing misclassification rates for the different metrics.

In Fig. 6, the distribution of the radius \(d(X,\omega )\), as given by Proposition 4, is fitted to the histograms of the Euclidean and tropical distances of gene trees to their corresponding species tree, with the corresponding pp-plots on the right. According to Proposition 4, for both the classical and tropical Laplace-distributed covariates, \(d(X,\omega ^*) \sim \sigma \textrm{Gamma}(n)\), shown as solid lines in Fig. 6, where \(n = e = 45\) and \(n=e-1=44\) in the classical and tropical case respectively. Similarly, for normally distributed covariates, \(d(X,\omega ^*) \sim \sigma \sqrt{\chi _{n}^2}\), shown as dashed lines. It is clear that Laplace distributions produce better fits in both geometries and that the tropical Laplacian fits the data best. As discussed in Sect. 2.2, the same analysis cannot be applied to the BHV metric, because the condition of Proposition 4 does not hold.
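
The scale parameter used in these fits has a closed-form estimate: if the distances follow \(\sigma \,\textrm{Gamma}(n)\) with known shape n, the MLE of the scale is the sample mean divided by n. A sketch (the function name is ours):

```python
import numpy as np

def fit_sigma(dists, shape):
    # MLE of sigma for d ~ sigma * Gamma(shape) with known shape:
    # setting the derivative of the gamma log-likelihood with respect
    # to sigma to zero gives sigma_hat = mean(d) / shape.
    return float(np.mean(dists)) / shape
```

Here the shape would be \(e-1=44\) in the tropical case and \(e=45\) in the Euclidean one.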

Species depth \(\text {SD}\) is the time since the speciation event between the species, and the effective population size N quantifies the genetic variation within the species. Datasets have been generated for a range of values of the ratio \(R:= \textrm{SD}/N \) by varying the species depth. For low values of R, speciation happened very recently, so the gene trees from the two classes look very much alike. Hence, classification is hard for datasets with low values of R and vice versa, because the gene deviation \(\sigma _R\) is a decreasing function of R. We therefore expect classification to improve as R increases. Figures 11 and 12 in “Appendix H” confirm this, showing that as R increases the receiver operating characteristic (ROC) curves improve and the Robinson–Foulds and tropical distances of the inferred (Fermat–Weber point) trees decrease. In addition, Fig. 7 shows that as R increases, AUCs increase (left) and misclassification rates decrease (right). It also shows that tropical logistic regression produces higher AUCs than classical logistic regression and other out-of-the-box ML classifiers, such as random forest classifiers, neural networks with a single sigmoid output layer, and support vector machines. Our model also produces lower misclassification rates than both the BHV and classical logistic regression. Finally, note that the generalization error upper bound given in Eq. (14) is satisfied but is not very tight (dashed line in Fig. 7).

Fig. 6
figure 6

(Left) Histograms of the distances of 1000 gene trees from the species trees that generated them under the coalescent model with \(R=0.7\). Coral and blue correspond to the tropical and Euclidean geometries respectively. The solid and dashed lines are the fitted distributions \(\sigma \textrm{Gamma}(n)\) and \(\sigma \sqrt{\chi ^2_{n}}\) respectively; \(\sigma \) is chosen to be the MLE, derived in the “Appendix”. The Euclidean metric has a worse fit than the tropical metric, as can also be observed in the corresponding pp-plots (right) (Color figure online)

Fig. 7
figure 7

(left) Average AUCs against R. The five classification models considered are the tropical two species-tree model (TLR), random forest classifier (RFC), support vector machines (SVM), neural networks (NN) and classical logistic regression (CLR). We used the default setup implemented by sklearn for RFC, SVM, NN and CLR. (right) The x-axis represents the ratio R and the y-axis the misclassification rate. Black circles represent tropical logistic regression, white circles classical logistic regression, grey points logistic regression with the BHV metric, and the dashed line the theoretical generalization error from Proposition 6 (Color figure online)

5.3 Convergence of Mr Bayes

Mr Bayes (Huelsenbeck et al. 2001) is a widely used software for Bayesian inference of phylogeny, using MCMC to sample the target posterior distribution. An important feature of the software is its diagnostic metric indicating whether a chain has converged to the equilibrium distribution. This is calculated at regular, user-specified intervals, set by the variable diagnfreq, using the average standard deviation of split frequencies (ASDSF; introduced by Lakner et al. 2008) between two independently run chains. The more similar the split frequencies between the two chains are, the lower the ASDSF, and the more likely it is that both chains have reached the equilibrium distribution.

Our classification model provides an alternative criterion for MCMC convergence. Consider two independently run chains; the sampled trees of the two chains correspond to two classes, and the AUC value measures how distinguishable the two chains are. High AUC values are associated with easily distinguishable chains, implying that the chains have not converged to the equilibrium distribution. At every iteration that is a multiple of diagnfreq, the ASDSF metric is calculated and the AUC of the two chains is found by applying tropical logistic regression to the truncated chains that keep only the last \(30 \%\) of the trees in each chain.
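
The AUC itself needs no explicit ROC construction: it equals the Mann–Whitney statistic of the classifier scores from the two truncated chains. A sketch (the score inputs are hypothetical lists of \(h(x)\) values, one per sampled tree):

```python
def auc(scores0, scores1):
    # Mann-Whitney estimate of the AUC: the probability that a randomly
    # chosen class-1 score exceeds a randomly chosen class-0 score,
    # counting ties as 1/2. Values near 1 mean the chains are easily
    # distinguished; values near 1/2 mean they are indistinguishable.
    wins = sum((s1 > s0) + 0.5 * (s1 == s0)
               for s1 in scores1 for s0 in scores0)
    return wins / (len(scores0) * len(scores1))
```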

For our comparison study, the data used were the gene sequences from the primates.nex file. This dataset comes with the Mr Bayes software and is used as an example in Ronquist et al. (2005). Figure 8 shows the two metrics at different iterations of the two independent chains run on this dataset. According to the Mr Bayes manual, the convergence threshold for their metric is \(10^{-2}\). This is achieved at the 800th iteration, when our method produces an AUC of \(97\%\), which indicates that the chains may not have converged yet, contrary to the suggestion of Mr Bayes. A likely explanation for this discrepancy is the dependence of the ASDSF on tree topologies rather than branch lengths. The frequencies of the tree topologies may have converged to those of the equilibrium distribution, even if the branch lengths have not. Eventually, the AUC values drop rapidly once iterations exceed \(2\times 10^3\), while the ASDSF decreases at a much slower rate. In this second phase, the branch lengths are calibrated, while the topology frequencies change little. Finally, for iterations beyond \(10^5\), neither metric can reject convergence, with the ASDSF a factor of 10 below the threshold and the AUC values finally dropping below \(70\%\), a typical threshold for poor classification.

When our classification method is compared to other classifiers, it marginally outperforms classical logistic regression and neural networks with a single sigmoid output, but underperforms support vector machines and random forest classifiers (Fig. 9). Despite their simplicity, logistic regression models cannot capture the full complexity of the chain classification problem. More advanced statistical methods that conform to tropical geometry (such as tropical support vector machines; Yoshida et al. 2023a) could be applied instead, at the cost of simplicity and interpretability.

Fig. 8
figure 8

(Left) Average ASDSF (in red) and AUC (in blue) values plotted against the number of iterations of the MCMC chains. The coloured dashed lines correspond to the first and third quartiles. The grey dashed lines indicate the Mr Bayes threshold for the ASDSF and our provisional AUC threshold of \(80\%\). (Right) ASDSF and AUC values plotted against each other, with iterations coloured according to the colourbar and the dashed lines corresponding to the thresholds for each metric (Color figure online)

Fig. 9
figure 9

Average AUC values plotted against the number of MCMC iterations for the 5 supervised learning methods considered (color figure online)

6 Discussion

In this paper we developed a tropical analog of the classical logistic regression model and considered two special cases: the one species-tree model and the two species-tree model. In our empirical work the two-species model was the more effective, but we anticipate that both are potentially impactful tools for phylogenomic analysis. The one-species model’s principal benefit is having the same number of parameters as predictors, unlike the two-species model, which has almost twice as many. Therefore, the one-species model more readily fits the standard definition of a generalized linear model and could generalize to a stack of GLMs to produce a “tropical” neural network, as investigated in Yoshida et al. (2023b).

The two-species model, implemented on data generated under the coalescent model, outperformed the classical and BHV logistic regression models in terms of misclassification rates, AUCs and the fit of the distribution of distances to their centre. It was also observed that Laplace distributions fitted better than Gaussians in both geometries. Empirically selecting tropical distributions over Euclidean ones suffices for the scope of this paper, but further theoretical justification of the suitability of such distributions is needed. Moreover, further research on the generalization error for the two-species model could provide tighter bounds.

Finally, the AUC metric of our model is proposed as an alternative to the ASDSF metric for MCMC convergence checking. Our metric is more conservative and robust, as it takes branch lengths into account. Nonetheless, computing the ASDSF is less computationally intensive than running our method; there is a tradeoff between the reliability of the convergence criterion and computational speed. Further research could shed light on the types of datasets where the ASDSF metric becomes unreliable. The two metrics could then complement each other, with our method applied only when there is a good indication that the ASDSF is unreliable.