Regression Models for Compositional Data: General Log-Contrast Formulations, Proximal Optimization, and Microbiome Data Applications

Compositional data sets are ubiquitous in science, including geology, ecology, and microbiology. In microbiome research, compositional data primarily arise from high-throughput sequence-based profiling experiments. These data comprise microbial compositions in their natural habitat and are often paired with covariate measurements that characterize physicochemical habitat properties or the physiology of the host. Inferring parsimonious statistical associations between microbial compositions and habitat- or host-specific covariate data is an important step in exploratory data analysis. A standard statistical model linking compositional covariates to continuous outcomes is the linear log-contrast model. This model describes the response as a linear combination of log-ratios of the original compositions and has been extended to the high-dimensional setting via regularization. In this contribution, we propose a general convex optimization model for linear log-contrast regression which includes many previous proposals as special cases. We introduce a proximal algorithm that solves the resulting constrained optimization problem exactly with rigorous convergence guarantees. We illustrate the versatility of our approach by investigating the performance of several model instances on soil and gut microbiome data analysis tasks.


Introduction
Compositional data sets are ubiquitous in many areas of science, spanning such disparate fields as geology and ecology. In microbiology, compositional data arise from high-throughput sequence-based microbiome profiling techniques, such as targeted amplicon sequencing (TAS) and metagenomic profiling. These methods generate large-scale genomic survey data of microbial community compositions in their natural habitat, ranging from marine ecosystems to host-associated environments. Elaborate bioinformatics processing tools [5,6,13,17,28] typically summarize TAS-based sequencing reads into sparse compositional counts of operational taxonomic units (OTUs). The quantification of the relative abundances of OTUs in the environment is often accompanied by measurements of other covariates, including physicochemical properties of the underlying habitats, variables related to the health status of the host, or those coming from other high-throughput protocols, such as metabolomics or flow cytometry.
An important step in the initial exploratory analysis of such data sets is to infer parsimonious and robust statistical relationships between the microbial compositions and habitat- or host-specific measurements. Standard linear regression modeling cannot be applied in this context because the microbial count data carry only relative or compositional information. One of the most popular approaches to regression modeling with compositional covariates is the log-contrast regression model, originally proposed in [2] in the context of experiments with mixtures. The linear log-contrast model expresses the continuous outcome of interest as a linear combination of the log-transformed compositions subject to a zero-sum constraint on the regression vector. This leads to the intuitive interpretation of the response as a linear combination of log-ratios of the original compositions. In a series of papers, the linear log-contrast model has been generalized to the high-dimensional setting via regularization. The sparse linear log-contrast model, introduced in [20], considers variable selection via ℓ1 regularization and has been extended (i) to multiple linear constraints for sub-compositional coherence across predefined groups of predictors [30]; (ii) to sub-composition selection via tree-structured sparsity-inducing penalties [33]; (iii) to longitudinal data modeling via a constrained group lasso penalty [32]; and (iv) to outlier detection via a mean shift modeling approach [23]. A common theme of these statistical approaches to log-contrast modeling is the formulation of the estimators as the solution of a convex optimization problem, and the theoretical analysis of the statistical properties of these estimators under suitable assumptions on the data.
In the present paper, we take a complementary approach and focus on the structure of the optimization problems underlying log-contrast modeling. We propose a general optimization model for linear log-contrast regression which includes previous proposals as special cases and allows for a number of novel formulations that have not yet been explored. A particular feature of our model is the joint estimation of regression vectors and associated scales for log-contrast models, similar to the scaled Lasso approach in high-dimensional linear regression [31]. This is achieved by leveraging recent results on the connection between perspective functions and statistical models [8-10]. We introduce a Douglas-Rachford splitting algorithm that produces an exact solution to the resulting constrained optimization problems with rigorous guarantees on the convergence of the iterates. By contrast, most existing approaches to solving such problems proceed by first approximating them and then employing coordinate descent methods with less demanding convergence guarantees. We illustrate the versatility of our modeling approach by applying novel log-contrast model instances to environmental and gut microbiome data analysis tasks.

Linear Log-Contrast Models
We first introduce the statistical log-contrast data formation model under consideration. We then review several prominent estimators for regularized log-contrast regression models.

Statistical Log-Contrast Data Formation Model
Let Z be a known (n × p)-dimensional compositional design matrix with rows (z_i)_{1⩽i⩽n}. In the microbiome context, each row represents a composition of p OTUs or of components at a higher taxonomic rank. We apply a log transform (x_i)_{1⩽i⩽n} = (log z_i)_{1⩽i⩽n}, resulting in the design matrix X ∈ ℝ^{n×p}. In this context, we introduce the following log-contrast data formation model.

Model 1
The vector y = (y_i)_{1⩽i⩽n} ∈ ℝ^n of observations is y = Xb + o + Se, with C⊤b = 0, where X ∈ ℝ^{n×p} is the aforementioned design matrix with rows (x_i)_{1⩽i⩽n}, b ∈ ℝ^p is the unknown regression vector (location), o ∈ ℝ^n is the unknown mean shift vector containing outliers, e ∈ ℝ^n is a vector of realizations of i.i.d. zero-mean random variables, S ∈ [0, +∞[^{n×n} is a diagonal matrix whose diagonal entries are the (unknown) standard deviations, and C ∈ ℝ^{p×K} is a matrix expressing K linear constraints on the regression vector.
The linear log-contrast data formation model is similar to the standard (heteroscedastic) linear model, with the important difference that there are linear equality constraints on the regression vector. This stems from the fact that the entries in X ∈ ℝ^{n×p} are not independent due to their compositional nature. In the original model [2], the constraint matrix C ∈ ℝ^{p×K} is the p-dimensional all-ones vector 1_p, resulting in a zero-sum constraint on the regression vector. To gain some intuition about the implications of this constraint, consider a two-dimensional example with given estimates b = (β_1, β_2), and denote by x_{i,1} and x_{i,2} the first and second column entries of X. The linear equality constraint enforces β_2 = −β_1, and thus each observation can be expressed as y_i = β_1 x_{i,1} + β_2 x_{i,2} + e_i = β_1 (x_{i,1} − x_{i,2}) + e_i.

Due to the construction of the design matrix as the log transformation of the compositions, this model is equivalent to y_i = β_1 log(z_{i,1}/z_{i,2}) + e_i, which expresses the response as a linear function of the log-ratio of the original compositional components. This example also shows that the regression coefficients in the log-contrast model bear a different interpretation than in the standard linear model: combined log-ratio coefficients relate the response to log-fold changes of the corresponding component ratios.
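The two-component identity above is easy to verify numerically. The following is a minimal illustrative sketch (the data and coefficient values are, of course, arbitrary):

```python
# Sketch: with a zero-sum regression vector b = (b1, -b1), the linear
# combination b1*x_i1 + b2*x_i2 of log-compositions equals b1*log(z_i1/z_i2).
import numpy as np

rng = np.random.default_rng(0)
z = rng.dirichlet([2.0, 3.0], size=5)        # five 2-part compositions
x = np.log(z)                                 # log-transformed design matrix
b = np.array([0.7, -0.7])                     # zero-sum regression vector

lin_comb = x @ b                              # b1*x_i1 + b2*x_i2
log_ratio = 0.7 * np.log(z[:, 0] / z[:, 1])   # b1*log(z_i1/z_i2)
assert np.allclose(lin_comb, log_ratio)
```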

Sparse Log-Contrast Regression
In the low-dimensional setting, the standard log-contrast model with zero-sum constraints can be estimated by solving a least-squares problem subject to a linear constraint or, alternatively, via standard linear regression applied to isometrically log-ratio transformed compositions [14]. In the high-dimensional setting, structural assumptions on the regression vector are needed for consistent estimation. To this end, the sparse log-contrast model was introduced in [20]. It is based on the optimization problem of minimizing (1/2)‖y − Xb‖_2^2 + λ‖b‖_1 over b ∈ ℝ^p subject to 1_p⊤b = 0, where ‖·‖_1 is the ℓ1 norm and λ ∈ [0, +∞[ is a tuning parameter that balances model fit and sparsity of the solution. The estimator enjoys several desirable properties, including scale invariance, permutation invariance, and selection invariance. The latter property is intimately related to the principle of sub-compositional coherence [1] and means that the estimator is unchanged if one knew in advance the sparsity pattern of the solution and applied the procedure to the sub-compositions formed by the nonzero components. In [20], model consistency guarantees are derived for the estimator, and the underlying optimization problem is approached via penalization. The proposed iterative algorithm alternates between estimating the Lagrange multipliers and solving a convex subproblem with a coordinate descent strategy. Model selection for the regularization parameter λ is performed with a generalized information criterion.
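For illustration, the sketch below solves the sparse log-contrast problem with a plain proximal-gradient scheme (our own illustration, not the coordinate-descent method of [20]). The proximity operator of λ‖·‖_1 plus the zero-sum constraint reduces to soft-thresholding a shifted vector, with the shift chosen by bisection so that the thresholded vector sums to zero:

```python
# Sketch (illustrative, helper names are ours): proximal gradient for
#   minimize 0.5*||y - Xb||^2 + lam*||b||_1  subject to  sum(b) = 0.
# The prox of lam*||.||_1 + indicator{sum(b)=0} is soft(v - mu*1, lam),
# where mu is found by bisection so the result sums to zero.
import numpy as np

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_l1_zerosum(v, lam):
    lo, hi = v.min() - lam, v.max() + lam   # sum(soft(v - mu, lam)) is nonincreasing in mu
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if soft(v - mu, lam).sum() > 0:
            lo = mu
        else:
            hi = mu
    return soft(v - 0.5 * (lo + hi), lam)

def sparse_log_contrast(X, y, lam, n_iter=2000):
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L for the smooth quadratic term
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        b = prox_l1_zerosum(b - step * grad, step * lam)
    return b
```

With lam = 0, the prox reduces to centering, i.e., the projection onto the zero-sum hyperplane.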

Sparse Log-Contrast Regression with Side Information
In many situations, it is desirable to incorporate side information about the covariates into log-contrast modeling. For instance, for microbial compositions, each component can be associated with taxonomic or phylogenetic information, thus relating the p components through a rooted taxonomic or phylogenetic tree T_p. One way to use this hierarchical tree information is to perform log-contrast regression at a higher taxonomic level, effectively reducing the dimensionality of the regression problem. Let T_p be a tree with levels 1 ⩽ i_h ⩽ h and p leaves, and assume that, at a given level i_h, the p compositions split into K groups with sizes (p_k)_{1⩽k⩽K}. Sub-compositional coherence across the groups can be expressed by the linear constraints C⊤b = 0, where C is an orthogonal (p × K)-dimensional binary matrix whose kth column comprises p_k ones at the coordinates of the components that belong to the kth group. Sparse log-contrast regression with group-level compositional coherence can thus be achieved by solving the optimization problem of minimizing (1/2)‖y − Xb‖_2^2 + λ‖b‖_1 subject to C⊤b = 0, where λ ∈ [0, +∞[ is a tuning parameter. In [30], model consistency guarantees are derived for this estimator, as well as a debiasing procedure for the final estimates, by extending results from [16] to the log-contrast setting. There, the underlying optimization problem is approached via an augmented Lagrangian approach, while model selection is achieved by scaling a theoretically derived λ_0 with a data-driven heuristic estimate σ̂ of the standard deviation [31], resulting in λ = σ̂λ_0. An alternative way of incorporating tree information has been proposed in [33]. There, the tree structure is encoded in a parameterized matrix J_α ∈ ℝ^{(m−1)×p}, where m is the number of vertices in the tree and α ∈ [0, 1] is a weighting parameter. An estimator based on the minimization problem (6), which minimizes (1/2)‖y − Xb‖_2^2 + λ‖J_α b‖_1 subject to the linear constraints, is proposed, where λ ∈ [0, +∞[ is a tuning parameter. The structure of J_α promotes tree-guided sparse sub-composition selection.
The authors of [33] do not solve the optimization problem in (6) exactly; they resort to a heuristic that abandons the linear constraints and solves a generalized Lasso problem instead. The two tuning parameters λ and α are selected via an information criterion.
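The binary group-constraint matrix C described above can be constructed directly from group labels. A minimal sketch (the group labels and helper name are illustrative):

```python
# Sketch: build the binary constraint matrix C from group labels so that
# C.T @ b = 0 enforces a zero sum within each predefined group (e.g., phylum).
import numpy as np

def group_constraint_matrix(labels):
    groups = sorted(set(labels))
    C = np.zeros((len(labels), len(groups)))
    for k, g in enumerate(groups):
        C[[i for i, l in enumerate(labels) if l == g], k] = 1.0
    return C

labels = ["Firmicutes", "Firmicutes", "Bacteroidetes", "Bacteroidetes", "Firmicutes"]
C = group_constraint_matrix(labels)
# columns have disjoint supports, hence C has orthogonal columns
assert np.allclose(C.T @ C, np.diag(C.sum(axis=0)))
b = np.array([1.0, -1.0, 0.5, -0.5, 0.0])    # zero sum within each group
assert np.allclose(C.T @ b, 0.0)
```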

Robust Log-Contrast Regression
The previous estimators assume the response to be outlier-free with respect to the statistical model under consideration. One way to relax this assumption and to guard against outliers in the response is to use a robust data fitting term. In [23], robust log-contrast regression is introduced via mean shift modeling; see, e.g., [3,29]. One specific instance of this framework considers the estimation problem of minimizing (1/2)‖y − Xb − o‖_2^2 + λ_1‖b‖_1 + λ_2‖o‖_1 subject to C⊤b = 0, where nonzero elements of the mean shift vector o ∈ ℝ^n capture outlier data, and λ_1 and λ_2 are tuning parameters. In [25], the objective function in (7) is approximated in the form of (5) with a single tuning parameter. As shown in [3] for partial linear models and in [29] for outlier detection, an equivalent form of (7) is obtained by using the Huber function [15] as robust data fitting function and the ℓ1 norm as regularizer. The Huber function is defined as h_ρ(u) = u^2/2 if |u| ⩽ ρ and h_ρ(u) = ρ|u| − ρ^2/2 otherwise, where ρ ∈ ]1, +∞[ is a fixed parameter with default value ρ = 1.345 that determines the transition from the quadratic to the linear part. The model in (7) can thus be written as the minimization of Σ_{i=1}^n h_ρ(y_i − x_i⊤b) + λ_1‖b‖_1 subject to C⊤b = 0, with ρ = λ_2. After model estimation, each data point whose residual lies in the linear region of the Huber function is considered an outlier. The latter two models thus allow for joint sparse selection of predictors and outliers in a convex framework.
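The equivalence between the mean-shift term and the Huber function can be checked pointwise: minimizing (1/2)(r − o)^2 + ρ|o| over the shift o yields exactly h_ρ(r). A minimal sketch (our illustration; the function names are hypothetical):

```python
# Sketch: the Huber function as the result of minimizing out a mean-shift term,
#   h_rho(r) = min_o 0.5*(r - o)**2 + rho*|o|,
# the scalar version of the equivalence between (7) and the Huber formulation.
import numpy as np

def huber(r, rho=1.345):
    a = np.abs(r)
    return np.where(a <= rho, 0.5 * r**2, rho * a - 0.5 * rho**2)

def huber_via_shift(r, rho=1.345):
    # the minimizing shift is the soft-thresholding of r at level rho
    o = np.sign(r) * np.maximum(np.abs(r) - rho, 0.0)
    return 0.5 * (r - o)**2 + rho * np.abs(o)

r = np.linspace(-5.0, 5.0, 101)
assert np.allclose(huber(r), huber_via_shift(r))
```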

Optimization of General Log-Contrast Models
We introduce an optimization model for general log-contrast regression that includes all previous examples as special cases. We assume that the data follow the data formation model outlined in Model 1. Our model belongs to the class of perspective M-estimation models [10] and allows for joint estimation of regression parameters and corresponding scales while preserving the overall convexity of the model. We then present a proximal algorithm that can solve instances of the optimization model with theoretical guarantees on the convergence of the iterates. Finally, we propose two model selection schemes for practical regularization parameter selection that leverage the joint scale estimation capability of our optimization model.

Convex Optimization Model
Let us first introduce some notation (see [4,27] for details). We denote by Γ_0(ℝ^n) the class of lower semicontinuous convex functions f: ℝ^n → ]−∞, +∞] such that dom f = {x ∈ ℝ^n | f(x) < +∞} ≠ ∅. Given f ∈ Γ_0(ℝ^n) and x ∈ ℝ^n, the unique minimizer of f + ‖x − ·‖_2^2/2 is denoted by prox_f x; in other words, prox_f x = argmin_{u∈ℝ^n} (f(u) + ‖x − u‖_2^2/2). Now let D be a convex subset of ℝ^n. Then ι_D is the indicator function of D (it takes value 0 on D and +∞ on its complement), ri D is the relative interior of D (its interior relative to its affine hull), and, if D is nonempty and closed, proj_D = prox_{ι_D} is the projection operator onto D.
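Two concrete proximity operators illustrate these definitions (a sketch using standard closed forms, checked against the defining minimization):

```python
# Sketch (illustrative): proximity operators checked against the definition
#   prox_f(x) = argmin_u f(u) + 0.5*(x - u)^2.
import numpy as np

def prox_abs(x, lam):
    # prox of u -> lam*|u| is soft-thresholding
    return np.sign(x) * max(abs(x) - lam, 0.0)

x, lam = 1.7, 0.5
grid = np.linspace(-5.0, 5.0, 200001)
brute = grid[np.argmin(lam * np.abs(grid) + 0.5 * (x - grid) ** 2)]
assert abs(prox_abs(x, lam) - brute) < 1e-3

# prox of the indicator of a nonempty closed convex set is the projection;
# for the subspace D = {s : all entries equal}, projecting averages the entries.
s = np.array([1.0, 3.0, 5.0])
proj = np.full_like(s, s.mean())
assert abs((s - proj) @ np.ones(3)) < 1e-12   # residual is orthogonal to D
```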
The following general log-contrast optimization model enables the joint estimation of the regression vector b = (β_k)_{1⩽k⩽p} ∈ ℝ^p and of the scale vector s = (σ_i)_{1⩽i⩽N} ∈ ℝ^N in Model 1 within a convex optimization setting.
Problem 1  For every i ∈ {1, …, N}, let φ_i ∈ Γ_0(ℝ^{n_i}), let φ̃_i be the perspective of φ_i as constructed in (11), let X_i ∈ ℝ^{n_i×p}, and let y_i ∈ ℝ^{n_i} be such that X and y are partitioned into N row blocks X = [X_1⊤ ⋯ X_N⊤]⊤ and y = (y_1, …, y_N), with n = n_1 + ⋯ + n_N. Finally, let D and E be the vector subspaces of (13), with E encoding the constraints C⊤b = 0, and, for every i ∈ {1, …, M}, let ψ_i ∈ Γ_0(ℝ^{m_i}) and L_i ∈ ℝ^{m_i×p}. The objective is to minimize Σ_{i=1}^N φ̃_i(σ_i, y_i − X_i b) + Σ_{i=1}^M ψ_i(L_i b) over (s, b) ∈ D × E. (14)

Remark 1  Problem 1 comprises four main components which are associated with different aspects of the general log-contrast regression model.
- The perspective data-fitting functions (φ̃_i)_{1⩽i⩽N} couple each residual block with its own scale parameter while ensuring that (14) remains a convex problem in (s, b).
- Problem 1 allows for the partitioning of the design matrix X and response y into N blocks with individual scale parameters (σ_i)_{1⩽i⩽N}. This is beneficial when data from multiple measurement sources are available for the prediction of the response or when heteroscedasticity in the design matrix is expected for different groups of measurements. Introducing multiple scales also has numerical advantages: as discussed in [10], certain proximity operators of perspective functions are easier to compute in separable form.
- The vector subspaces D and E (see (13)) enforce, respectively, structural constraints on the scale vector s and the linear constraints C⊤b = 0 on the regression vector b.
- The regularization functions (ψ_i)_{1⩽i⩽M}, composed with the matrices (L_i)_{1⩽i⩽M}, promote structural properties of the regression vector, such as sparsity.

Perspective functions are discussed in [4,8-10,27]. The construction (11) guarantees that, for every i ∈ {1, …, N}, φ̃_i ∈ Γ_0(ℝ × ℝ^{n_i}). We provide below two examples of perspective functions that will be used in the numerical investigations of Sect. 4.

Algorithm
Our algorithmic solution method for Problem 1 relies on an application of the Douglas-Rachford splitting algorithm in a higher-dimensional space. To describe our methodology, let us first note that, since (14) involves nondifferentiable functions and hard constraints, it cannot be handled via methods that employ gradients. Rather, we must proceed with nonsmooth first-order methods, i.e., methods that activate the functions present in the model via their proximity operators defined in (10). Let us consider the basic problem of minimizing the sum of two lower semicontinuous convex functions F and G in a Euclidean space H, i.e., minimize_{x∈H} F(x) + G(x), (23) and let us assume that this problem has at least one solution. A key property of the proximity operator prox_F is that its set of fixed points is the set of minimizers of F [4, Proposition 12.29]. A naive approach to solving (23) would therefore be to construct iteratively a fixed point of prox_{F+G}. However, this is not viable because prox_{F+G} is typically intractable. On the other hand, in many instances the operators prox_F and prox_G are computable explicitly, which suggests the design of a splitting algorithm, i.e., one in which F and G are activated separately. The most popular splitting algorithm to solve (23) is the Douglas-Rachford algorithm [4,7,11,12,19,21]. This algorithm exploits the following remarkable fact: given an arbitrary γ ∈ ]0, +∞[, if a point v ∈ H satisfies the fixed point property v = v + prox_{γF}(2 prox_{γG} v − v) − prox_{γG} v, then prox_{γG} v is a solution to (23). The associated iterative scheme (25) reads x_k = prox_{γG} v_k, v_{k+1} = v_k + λ_k (prox_{γF}(2x_k − v_k) − x_k), where the relaxation parameters satisfy λ_k ∈ ]0, 2[, and the governing sequence (x_k)_{k∈ℕ} converges to a solution to (23).
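A minimal sketch of the Douglas-Rachford iteration on a toy instance (generic form with γ = 1 and λ_k ≡ 1; this is not Algorithm 3 itself): with F(x) = 0.5‖x − c‖^2 and G the indicator of a hyperplane, the solution is the projection of c onto the hyperplane.

```python
# Sketch (generic Douglas-Rachford, illustrative only):
# minimize F(x) + G(x) with F(x) = 0.5*||x - c||^2 and G the indicator of
# the hyperplane {x : a @ x = 0}; the minimizer is the projection of c.
import numpy as np

def prox_F(v, c, gamma=1.0):
    # prox of gamma * 0.5*||. - c||^2
    return (v + gamma * c) / (1.0 + gamma)

def prox_G(v, a):
    # projection onto {x : a @ x = 0}
    return v - (a @ v) / (a @ a) * a

c = np.array([3.0, 1.0])
a = np.array([1.0, 1.0])
v = np.zeros(2)
for _ in range(200):
    x = prox_G(v, a)                    # governing sequence -> solution
    v = v + prox_F(2 * x - v, c) - x    # Douglas-Rachford update (gamma = 1)
assert np.allclose(x, c - (a @ c) / (a @ a) * a, atol=1e-6)
```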
Our method for solving Problem 1 (Algorithm 3 below) is an implementation of (25) in a suitably constructed product space. The details of this construction, including the requisite matrices, the associated function, and the iteration vectors indexed by k ∈ ℕ, are provided in Appendix A.

Proposition 1 Consider the setting of Problem 1 and suppose that the qualification conditions (30) and (31) hold.
Then Problem 1 has at least one solution. Now let (s k ) k∈ℕ and (b k ) k∈ℕ be sequences generated by Algorithm 3. Then (s k ) k∈ℕ converges to some s ∈ ℝ N and (b k ) k∈ℕ converges to some b ∈ ℝ p such that (s, b) solves Problem 1.

Proof See Appendix A. ◻
Conditions (30) and (31) are satisfied in most practical situations. For example, the following describes a scenario that will be encountered in Sect. 4.

Proposition 2
Consider the setting of Problem 1 and suppose that the following additional properties hold:

Model Selection
In the context of log-contrast regression, a number of different model selection strategies have been proposed, including stability selection [20,22] and generalized information criteria [32]. In [30], a scale-dependent tuning parameter has been derived, where the optimal scale is found via line search. Our joint scale and regression modeling approach makes this line search obsolete, thus yielding a parameter-free model selection scheme. More specifically, we consider two model selection schemes. First, following [30], we consider λ_0 = √2 q_n(ν/p), where q_n(t) = n^{−1/2} Φ^{−1}(1 − t), Φ^{−1} is the quantile function of the standard normal distribution, and ν is the solution of the equation ν = q_1^4(ν/p) + 2q_1^2(ν/p). In practice, this data-independent model selection scheme may lead to the inclusion of spurious coefficients. To assess the robustness of the inferred solutions, we combine this theoretically derived regularization with stability selection [22]. The original stability selection approach selects, for every subsample, a small set of predictors from the regularization path, e.g., the first q predictors that appear along the path or the q coefficients that are largest in absolute value across the entire path. We here propose to select, for every subsample, the nonzero coefficients present at regularization parameter λ_0. Note that λ_0 is sample-size dependent and hence needs to be adapted to the specific subsample size used in stability selection. As default values, we consider a subsample size of ⌈n/2⌉ and generate 100 subsamples. The key diagnostic in stability selection is the selection frequency profile for each coefficient. To select a stable set of coefficients, a threshold parameter t_s ∈ [0.6, 0.9] is recommended [22]; all coefficients with selection frequency above t_s are included in the final model.
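The quantity λ_0 can be computed with a short fixed-point iteration. The sketch below is a reconstruction under the definitions above (the damping factor and iteration count are our own choices, not from [30]); for the COMBO dimensions of Sect. 4.1 (n = 96, p = 87) it lands close to the reported value λ_0 = 0.1997:

```python
# Sketch: compute lambda_0 = sqrt(2)*q_n(nu/p), with q_n(t) = Phi^{-1}(1-t)/sqrt(n)
# and nu solving nu = q_1(nu/p)**4 + 2*q_1(nu/p)**2 (damped fixed-point iteration;
# damping is our own choice to stabilize the iteration).
from math import sqrt
from statistics import NormalDist

def lambda0(n, p, n_fixed_point=100):
    q1 = lambda t: NormalDist().inv_cdf(1.0 - t)
    nu = 1.0
    for _ in range(n_fixed_point):
        rhs = q1(nu / p) ** 4 + 2 * q1(nu / p) ** 2
        nu = 0.5 * (nu + rhs)           # damped update
    return sqrt(2.0) * q1(nu / p) / sqrt(n)

lam0 = lambda0(96, 87)                  # close to the 0.1997 reported in Sect. 4.1
assert abs(lam0 - 0.1997) < 0.01
```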

Applications to Compositional Microbiome Data
We apply several instances of the general log-contrast model outlined in Problem 1 in the context of microbiome data analysis tasks. We set M = 1, m_1 = m, L_1 = Id, and employ as regularization function the ℓ1 norm ψ_1 = ‖·‖_1. We use the functions in Examples 1 and 2 as instances of the perspective loss functions φ̃_1 and refer to the resulting instances as the Least Squares and the Huber log-contrast model, respectively. Thus, in the case of the Least Squares model, the data-fitting term in (14) is the perspective of the squared residual loss, while in the case of the Huber model it is the perspective of the Huber loss. Note that the projection of a vector s ∈ ℝ^n onto D, as required in Algorithm 3, reduces to averaging the entries of s that share a common scale. Depending on the application, we use different zero-sum constraints on b, as specified by the matrix C. To solve the various instances of Problem 1, we use Algorithm 3 with relaxation parameters λ_k ≡ 1.9 and γ = 1. We consider that the algorithm has converged when ‖b_k − b_{k+1}‖_2 < 10^{−6}. All computational experiments are fully reproducible with the code available at https://github.com/muellsen/PCM/tree/master/examples/LogContrastModels.

Body Mass Index Prediction from Gut Microbiome Data
We first consider a cross-sectional study that examines the relationship between diet and gut microbiome composition, in which additional demographic covariates, including body mass index (BMI), are available; this is referred to as the COMBO data set [34]. After pre-processing and filtering, the data set comprises the log-transformed relative abundances of p = 87 taxa at the genus level across n = 96 healthy subjects. Following previous analyses [20,30], we investigate the relationship between BMI and the microbial compositions in a log-contrast regression framework. We use C = 1_p to model the standard zero-sum constraint. In addition to the compositional covariates, two further covariate measurements, fat and calorie intake, are taken into account via joint unpenalized least squares. We investigate the influence of the different loss functions, Least Squares and Huber, as well as of sub-compositional constraints on the quality of the estimation, the size of the support set, and the predictive power. Further numerical results can be found in Appendix C.
To highlight the ability of the algorithm to jointly estimate regression coefficients and scale, we solve the two problems over a regularization path with 40 λ values on a log-linear grid in [0.0069, 0.6989]. We also consider the theoretically derived regularization parameter λ_0 = 0.1997 from (32). Figure 3a and b show the solution paths of the regression vector b for the sparse Least Squares log-contrast model and the Huber model, respectively. Figure 3c displays the corresponding joint scale estimates for the Least Squares and the Huber models. The estimated regression coefficients at λ_0 are highlighted in Fig. 3d. Both models agree on a set of six genera, including Clostridium as the strongest negative and Acidaminococcus as the strongest positive predictor. This implies that the log-ratio of Acidaminococcus to Clostridium is positively associated with BMI. Other genera include Alistipes, Megamonas, and Coprobacillus with negative coefficients, and Dorea with a positive coefficient. In [20,30], the genera Alistipes, Clostridium, Acidaminococcus, and Allisonella have been identified as key predictors. The solutions of the perspective log-contrast models corroborate these findings for Clostridium and Acidaminococcus, and to a lesser extent for Alistipes, whereas the genus Allisonella has only a small strictly positive coefficient in both log-contrast models (Fig. 3d). Next, we consider the stability selection scheme introduced in Sect. 3.3 with default parameters and threshold t_s = 0.7. Figure 4a shows the stability-based selection frequency profiles for the sparse Least Squares and Huber log-contrast models. For both models, only Clostridium and Acidaminococcus are selected. Stability selection thus leads to a simple explanatory log-ratio model formed by the ratio of the relative abundances of Acidaminococcus to Clostridium. However, when considering the final model prediction results, as shown in Fig. 4b for the Huber model, this model can only explain normal to overweight participants (BMI 20-30) because 34 of the 96 participants are considered outliers in the Huber model. The overall refitted R^2 is 0.19 under the Huber model but increases to 0.43 for the 62 inlier participants.
Next, we investigate the influence of sub-compositional constraints on the stability selection frequencies for the two estimation procedures. We follow the analysis of [30] and consider a subset of 45 genera that have the highest relative abundances in the data set. These 45 genera belong to K = 4 distinct phyla: Actinobacteria (two genera), Bacteroidetes (eight genera), Firmicutes (32 genera), and Proteobacteria (three genera). The constraint matrix C is hence an orthogonal (45 × 4)-dimensional binary matrix. Figure 5a and b show the stability selection profiles for both the Least Squares and the Huber model with and without sub-compositional constraints, respectively. Figure 5c shows the difference in the selection frequency profiles. Although several genera, including Collinsella, Paraprevotella, Parabacteroides, Faecalibacterium, Oscillibacter, and Parasutterella, display significant frequency differences, the two genera Clostridium and Acidaminococcus, both belonging to the Firmicutes phylum, again demonstrate the highest stability both with and without sub-compositional constraints.

Relationship Between Soil Microbiome and pH Concentration
We next consider a data set put forward in [18] comprising n = 88 soil samples from North and South America. Both amplicon sequencing data and environmental covariates, including pH concentrations, are available and have been re-analyzed via a balance tree approach in [24]. The amplicon data contain p = 116 OTUs, and we consider C = 1_p. We perform stability selection with default parameters as outlined in Sect. 3.3; we refer to Appendix D for results regarding variable selection with the theoretical λ_0 value. The selection frequencies of the different regression coefficients are shown in Fig. 6a. At stability threshold t_s = 0.7, seven taxa are selected in the Least Squares model and six taxa in the Huber model. After re-estimation of the two perspective log-contrast models on the selected subsets, two taxa of order Ellin6513, one taxon of family Koribacteraceae, and one taxon of genus Rhodoplanes have negative coefficients, whereas two taxa belonging to the genus Balneimonas as well as one Rubrobacter taxon and one taxon of order RB41 have positive coefficients (Fig. 6b). The seven taxa identified in the Least Squares model thus allow for a compact representation with four log-ratios of compositions; the Huber model with six identified taxa requires only three log-ratios. The five coefficients that are selected in both models agree in sign but show small variations in value. The Huber model (R^2 = 0.86) deems 33 data points to be outliers in the final estimate (Fig. 6c). For completeness, we include the mean absolute deviation (MAD) between model estimates and data in Fig. 6d. The selected taxa cover a wide range of average pH levels (as provided in [24]), ranging from 4.9 to 6.75, implying that the learned model may indeed generalize to other soil types not present in the current data set.

Discussion and Conclusion
Finding linear relationships between continuous variables and compositional predictors is a common task in many areas of science. We have proposed a general estimation model for high-dimensional log-contrast regression which includes many previous proposals as special cases [20,23,30,33]. Our model belongs to the class of perspective M-estimation models [10], which allows for scale estimation in the data fitting term while preserving the overall convexity of the underlying model. This is made possible by recent advances in the theory of perspective functions [8-10].
Several data fitting and penalty functions are available in the present framework. For instance, the robust Huber model is a convenient choice when outliers are suspected in the continuous outcome vector, or equivalently, when only a subset of the outcome data is expected to follow a linear log-contrast relationship with the compositional predictors [10,23]. Combined with a sparsity-inducing penalty, the model allows for joint scale estimation, outlier detection, and variable selection in a single framework. Alternative choices for data fitting and regularization models are available in [10]. Our framework also enables sub-compositional coherence across groups of variables, e.g., bacterial phyla in microbiome data, via general linear constraints.
We have introduced a Douglas-Rachford algorithm that solves the corresponding constrained nonsmooth convex optimization problems with rigorous guarantees on the convergence of the iterates. Furthermore, we have illustrated the viability of our approach on two microbiome data analysis tasks: body mass index (BMI) prediction from gut microbiome data in the COMBO study and pH prediction from soil microbial abundance data. Our joint regression and scale estimation enabled the use of a single universal tuning parameter λ_0 [30] to control the sparsity of the estimates. We have combined this approach with stability-based model selection [22] to determine sparse stable sets of log-contrast predictors. For the gut microbiome BMI analysis, the robust Huber log-contrast model identified two genera whose log-ratio predicts BMI well for normal to overweight participants while simultaneously identifying outliers with respect to the log-contrast model. In the soil microbiome data set, we derived parsimonious pH prediction models. The Least Squares model requires four log-ratios of microbial compositions and achieves an overall R^2 = 0.88; the Huber model requires only three log-ratios of microbial compositions with an overall R^2 = 0.86.
Going forward, we believe that the general log-contrast model and the associated optimization and model selection techniques presented here will provide a valuable off-the-shelf tool for log-contrast regression analysis when compositional data such as microbial relative abundance data are used as predictors in exploratory data analysis. Future efforts will include the integration of the presented models in modern computational microbiome analysis software workflows.