Regression models for compositional data: General log-contrast formulations, proximal optimization, and microbiome data applications

Compositional data sets are ubiquitous in science, including geology, ecology, and microbiology. In microbiome research, compositional data primarily arise from high-throughput sequence-based profiling experiments. These data comprise microbial compositions in their natural habitat and are often paired with covariate measurements that characterize physicochemical habitat properties or the physiology of the host. Inferring parsimonious statistical associations between microbial compositions and habitat- or host-specific covariate data is an important step in exploratory data analysis. A standard statistical model linking compositional covariates to continuous outcomes is the linear log-contrast model. This model describes the response as a linear combination of log-ratios of the original compositions and has been extended to the high-dimensional setting via regularization. In this contribution, we propose a general convex optimization model for linear log-contrast regression which includes many previous proposals as special cases. We introduce a proximal algorithm that solves the resulting constrained optimization problem exactly with rigorous convergence guarantees. We illustrate the versatility of our approach by investigating the performance of several model instances on soil and gut microbiome data analysis tasks.


Introduction
Compositional data sets are ubiquitous in many areas of science, spanning such disparate fields as geology and ecology. In microbiology, compositional data arise from high-throughput sequence-based microbiome profiling techniques, such as targeted amplicon sequencing (TAS) and metagenomic profiling. These methods generate large-scale genomic survey data of microbial community compositions in their natural habitat, ranging from marine ecosystems to host-associated environments. Elaborate bioinformatics processing tools [5,6,10,14,23] typically summarize TAS-based sequencing reads into sparse compositional counts of operational taxonomic units (OTUs). The quantification of the relative abundances of OTUs in the environment is often accompanied by measurements of other covariates, including physicochemical properties of the underlying habitats, variables related to the health status of the host, or those coming from other high-throughput protocols, such as metabolomics or flow cytometry.
An important step in the initial exploratory analysis of such data sets is to infer parsimonious and robust statistical relationships between the microbial compositions and habitat- or host-specific measurements. Standard linear regression modeling cannot be applied in this context because the microbial count data carry only relative or compositional information. One of the most popular approaches to regression modeling with compositional covariates is the log-contrast regression model, originally proposed in [2] in the context of experiments with mixtures. The linear log-contrast model expresses the continuous outcome of interest as a linear combination of the log-transformed compositions subject to a zero-sum constraint on the regression vector. This leads to the intuitive interpretation of the response as a linear combination of log-ratios of the original compositions. In a series of papers, the linear log-contrast model has been generalized to the high-dimensional setting via regularization. The sparse linear log-contrast model, introduced in [16], considers variable selection via ℓ1 regularization and has been extended (i) to multiple linear constraints for sub-compositional coherence across predefined groups of predictors [25]; (ii) to sub-composition selection via tree-structured sparsity-inducing penalties [28]; (iii) to longitudinal data modeling via a constrained group lasso penalty [27]; and (iv) to outlier detection via a mean shift modeling approach [18]. A common theme of these statistical approaches to log-contrast modeling is the formulation of the estimators as the solution of a convex optimization problem, and the theoretical analysis of the statistical properties of these estimators under suitable assumptions on the data.
In the present paper, we take a complementary approach and focus on the structure of the optimization problems underlying log-contrast modeling. We propose a general optimization model for linear log-contrast regression which includes previous proposals as special cases and allows for a number of novel formulations that have not yet been explored. A particular feature of our model is the joint estimation of regression vectors and associated scales for log-contrast models, similar to the scaled Lasso approach in high-dimensional linear regression [26]. This is achieved by leveraging recent results on the connection between perspective functions and statistical models [7,8,9]. We introduce a Douglas-Rachford splitting algorithm that produces an exact solution to the resulting constrained optimization problems with rigorous guarantees on the convergence of the iterates. By contrast, most existing approaches to solve such problems proceed by first approximating them and then employing coordinate descent methods with less demanding convergence guarantees. We illustrate the versatility of our modeling approach by applying novel log-contrast model instances to environmental and gut microbiome data analysis tasks.

Linear log-contrast models
We first introduce the statistical log-contrast data formation model under consideration. We then review several prominent estimators for regularized log-contrast regression models.

Statistical log-contrast data formation model
Let Z be a known (n × p)-dimensional compositional design matrix with rows (z_i)_{1≤i≤n}. In the microbiome context, each row represents a composition of p OTUs or components at a higher taxonomic rank. We apply a log transform (x_i)_{1≤i≤n} = (log z_i)_{1≤i≤n}, resulting in the design matrix X ∈ R^{n×p}. In this context, we introduce the following log-contrast data formation model.
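As a concrete illustration, the construction of the design matrix can be sketched in a few lines of numpy; the toy counts and variable names below are ours, not from the paper.

```python
import numpy as np

# Hypothetical toy counts: n = 4 samples, p = 3 components.
counts = np.array([[10., 40., 50.],
                   [ 5., 80., 15.],
                   [30., 30., 40.],
                   [20., 10., 70.]])

# Close the counts to compositions (each row sums to 1).
# In practice, zero counts require a pseudocount before taking logs.
Z = counts / counts.sum(axis=1, keepdims=True)

# Log transform yields the design matrix X of the log-contrast model.
X = np.log(Z)
```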

Model 1
The vector y = (η_i)_{1≤i≤n} ∈ R^n of observations is

y = Xb + o + Se,  with  C^⊤ b = 0,

where X ∈ R^{n×p} is the aforementioned design matrix with rows (x_i)_{1≤i≤n}, b ∈ R^p is the unknown regression vector (location), o ∈ R^n is the unknown mean shift vector containing outliers, e ∈ R^n is a vector of realizations of i.i.d. zero-mean random variables, S ∈ [0,+∞[^{n×n} is a diagonal matrix the diagonal entries of which are the (unknown) standard deviations, and C ∈ R^{p×K} is a matrix expressing K linear constraints on the regression vector.
The linear log-contrast data formation model is similar to the standard (heteroscedastic) linear model, with the important difference that there are linear equality constraints on the regression vector. This stems from the fact that the entries in X ∈ R^{n×p} are not independent due to their compositional nature. In the original model [2], the constraint matrix C ∈ R^{p×K} is the p-dimensional all-ones vector 1_p, resulting in a zero-sum constraint on the regression vector. To gain some intuition about the implications of this constraint, consider a two-dimensional example with given estimates b = (β_1, β_2), and denote by ξ_{i,1} and ξ_{i,2} the first and second column entries of X. The linear equality constraint enforces β_2 = −β_1, and thus each observation can be expressed as

η_i = β_1 ξ_{i,1} + β_2 ξ_{i,2} = β_1 (ξ_{i,1} − ξ_{i,2}).

Due to the construction of the design matrix as the log transformation of the compositions, this model is equivalent to

η_i = β_1 log(z_{i,1}/z_{i,2}),

which expresses the response as a linear function of the log-ratios of the original compositional components. This example also shows that the regression coefficients in the log-contrast model bear a different interpretation than in the standard linear model. Combined log-ratio coefficients relate the response to log-fold changes of the corresponding component ratios.
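The two-dimensional equivalence can be checked numerically; the following sketch (a toy example of our own) also verifies that the zero-sum constraint makes the predictor invariant to the arbitrary overall scale of the composition.

```python
import numpy as np

beta1 = 1.7
z = np.array([0.2, 0.8])             # a two-part composition
x = np.log(z)
b = np.array([beta1, -beta1])        # zero-sum constraint: beta2 = -beta1

lin = x @ b                          # linear log-contrast predictor
ratio = beta1 * np.log(z[0] / z[1])  # equivalent log-ratio form
assert np.isclose(lin, ratio)

# Scale invariance: multiplying the composition by any positive
# constant leaves the predictor unchanged, since b sums to zero.
assert np.isclose(np.log(42.0 * z) @ b, lin)
```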

Sparse log-contrast regression
In the low-dimensional setting, the standard log-contrast model with zero-sum constraint can be estimated by solving a least-squares problem subject to a linear constraint or, alternatively, via standard linear regression applied to isometrically log-ratio transformed compositions [11]. In the high-dimensional setting, we need structural assumptions on the regression vector for consistent estimation. To this end, the sparse log-contrast model was introduced in [16]. It is based on the optimization problem

minimize_{b ∈ R^p, 1_p^⊤ b = 0}  (1/(2n)) ‖y − Xb‖²_2 + λ‖b‖_1,

where ‖·‖_1 is the ℓ1 norm and λ ∈ [0,+∞[ is a tuning parameter that balances model fit and sparsity of the solution. The estimator enjoys several desirable properties, including scale invariance, permutation invariance, and selection invariance. The latter property is intimately related to the principle of sub-compositional coherence [1] and means that the estimator is unchanged if one knew in advance the sparsity pattern of the solution and applied the procedure to the sub-compositions formed by the nonzero components. In [16], model consistency guarantees are derived for the estimator and the underlying optimization problem is approached via penalization. The proposed iterative algorithm alternates between estimating the Lagrange multipliers and solving a convex subproblem with a coordinate descent strategy. Model selection for the regularization parameter λ is performed with a generalized information criterion.
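For concreteness, the zero-sum constrained lasso above can be solved by a number of splitting methods; the paper develops a Douglas-Rachford algorithm, but as an illustrative stand-in the following self-contained sketch uses a textbook ADMM splitting (the variable split, KKT solve, and all parameter choices are ours).

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def constrained_lasso_admm(y, X, lam, rho=1.0, n_iter=500):
    """ADMM sketch for: min (1/(2n))||y - Xb||^2 + lam*||b||_1  s.t. 1'b = 0.
    Split b = z; the b-update is an equality-constrained quadratic program
    solved via its KKT system, the z-update is soft thresholding."""
    n, p = X.shape
    Q = X.T @ X / n + rho * np.eye(p)
    # KKT matrix [[Q, 1], [1', 0]] for the zero-sum constrained b-update.
    K = np.zeros((p + 1, p + 1))
    K[:p, :p] = Q
    K[:p, p] = 1.0
    K[p, :p] = 1.0
    K_inv = np.linalg.inv(K)
    Xty = X.T @ y / n
    b = np.zeros(p); z = np.zeros(p); u = np.zeros(p)
    for _ in range(n_iter):
        rhs = np.concatenate([Xty + rho * (z - u), [0.0]])
        b = (K_inv @ rhs)[:p]                 # satisfies 1'b = 0 exactly
        z = soft_threshold(b + u, lam / rho)  # sparse copy of b
        u = u + b - z                         # dual update
    return z
```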

Sparse log-contrast regression with side information
In many situations, it is desirable to incorporate side information about the covariates into log-contrast modeling. For instance, for microbial compositions, each component can be associated with taxonomic or phylogenetic information, thus relating the p components through a rooted taxonomic or phylogenetic tree T_p. One way to use this hierarchical tree information is to perform log-contrast regression at a higher taxonomic level, effectively reducing the dimensionality of the regression problem. Let T_p be a tree with levels 1 ≤ i_h ≤ h and p leaves, and assume that, at a given level i_h, the p compositions split into K groups with sizes (p_k)_{1≤k≤K}. Sub-compositional coherence across the groups can be expressed by the linear constraints C^⊤ b = 0, where C is an orthogonal (p × K)-dimensional binary matrix. The k-th column comprises p_k ones at the coordinates of the components that belong to the k-th group. Sparse log-contrast regression with group-level compositional coherence can thus be achieved by solving the optimization problem

minimize_{b ∈ R^p, C^⊤ b = 0}  (1/(2n)) ‖y − Xb‖²_2 + λ‖b‖_1,  (5)

where λ ∈ [0,+∞[ is a tuning parameter. In [25], model consistency guarantees are derived for this estimator, as well as a debiasing procedure for the final estimates. This is done by extending results from [13] to the log-contrast setting. In [25], the underlying optimization problem is approached via an augmented Lagrangian approach, while model selection is achieved by scaling a theoretically derived λ_0 with a data-driven heuristic estimate of the standard deviation σ [26], resulting in λ = λ_0 σ. An alternative way of incorporating tree information has been proposed in [28]. There, the tree structure is encoded in a parameterized matrix J_α ∈ R^{(m−1)×p}, where m is the number of vertices in the tree. An estimator based on the minimization problem

minimize_{b ∈ R^p, C^⊤ b = 0}  (1/(2n)) ‖y − Xb‖²_2 + λ‖J_α b‖_1  (6)

is proposed, where λ ∈ [0,+∞[ is a tuning parameter. The structure of J_α promotes tree-guided sparse sub-composition selection and comprises a weighting parameter α ∈ [0, 1]. The authors of [28] are
unable to solve the optimization problem in (6) exactly and instead resort to a heuristic that abandons the linear constraints and solves a generalized Lasso problem. The two tuning parameters λ and α are selected via an information criterion.
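The binary group-constraint matrix C used in (5) is easy to build from group labels; the helper below is our own sketch (the toy group assignment is hypothetical).

```python
import numpy as np

def group_constraint_matrix(groups):
    """Build the (p x K) binary constraint matrix: column k carries ones
    at the coordinates of the components belonging to group k."""
    labels = sorted(set(groups))
    return np.array([[1.0 if g == k else 0.0 for k in labels]
                     for g in groups])

# Hypothetical assignment of p = 6 components to K = 3 groups.
C = group_constraint_matrix(["A", "A", "B", "B", "B", "C"])
# Columns are mutually orthogonal, and C.T @ b = 0 enforces a
# zero sum of the coefficients within each group.
```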

Robust log-contrast regression
The previous estimators assume the response to be outlier-free with respect to the statistical model under consideration. One way to relax this assumption and to guard against outliers in the response is to use a robust data fitting term. In [18], robust log-contrast regression is introduced via mean shift modeling; see, e.g., [3,24]. One specific instance of this framework considers the estimation problem

minimize_{b ∈ R^p, o ∈ R^n, 1_p^⊤ b = 0}  (1/(2n)) ‖y − Xb − o‖²_2 + λ_1‖b‖_1 + λ_2‖o‖_1,  (7)

where nonzero elements in the mean shift vector o ∈ R^n capture outlier data, and λ_1 and λ_2 are tuning parameters. In [20], the objective function in (7) is approximated in the form of (5) with a single tuning parameter. As shown in [3] for partial linear models and in [24] for outlier detection, an equivalent form of (7) is obtained by using the Huber function [12] as robust data fitting function and the ℓ1 norm as regularizer. The Huber function is defined as

h_ρ: R → R: u ↦ { u²/2, if |u| ≤ ρ;  ρ|u| − ρ²/2, if |u| > ρ },

where ρ ∈ ]0,+∞[ is a fixed parameter with default value ρ = 1.345 that determines the transition from the quadratic to the linear part. The model in (7) can thus be written as

minimize_{b ∈ R^p, 1_p^⊤ b = 0}  Σ_{i=1}^n h_ρ(η_i − x_i^⊤ b) + λ‖b‖_1.

After model estimation, each data point in the linear region of the Huber function is considered an outlier. The latter two models thus allow for joint sparse selection of predictors and outliers in a convex framework.
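In code, the Huber function is a one-liner; this numpy sketch uses the default ρ = 1.345 from the text.

```python
import numpy as np

def huber(u, rho=1.345):
    """Huber function: quadratic for |u| <= rho, linear with slope rho beyond."""
    au = np.abs(u)
    return np.where(au <= rho, 0.5 * u ** 2, rho * au - 0.5 * rho ** 2)
```

The two branches meet at |u| = ρ with matching value and slope, which is what makes the function convex and continuously differentiable.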

Optimization of general log-contrast models
We introduce an optimization model for general log-contrast regression that includes all previous examples as special cases. We assume that the data follow the data formation model outlined in Model 1. Our model belongs to the class of perspective M-estimation models [9] and allows for joint estimation of regression parameters and corresponding scales while preserving the overall convexity of the model. We then present a proximal algorithm that can solve instances of the optimization model with theoretical guarantees on the convergence of the iterates. Finally, we propose two model selection schemes for practical regularization parameter selection that leverage the joint scale estimation capability of our optimization model.

Convex optimization model
Let us first introduce some notation (see [4,22] for details). We denote by Γ_0(R^n) the class of proper lower semicontinuous convex functions ϕ: R^n → ]−∞,+∞]. For ϕ ∈ Γ_0(R^n) and x ∈ R^n, the unique minimizer of ϕ + ‖· − x‖²/2 is denoted by prox_ϕ x. Now let D be a convex subset of R^n. Then ι_D is the indicator function of D (it takes value 0 on D and +∞ on its complement), ri D is the relative interior of D (its interior relative to its affine hull), and, if D is nonempty and closed, proj_D = prox_{ι_D} is the projection operator onto D.
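Two proximity operators that recur in the models above have simple closed forms: the prox of a scaled ℓ1 norm (soft thresholding) and the projection onto the zero-sum subspace. A small numpy sketch in our own notation:

```python
import numpy as np

def prox_l1(x, gamma):
    """prox of gamma*||.||_1 at x: componentwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def proj_zero_sum(x):
    """proj_D for D = {x : sum(x) = 0}, i.e. prox of the indicator of D:
    subtract the mean so the result sums to zero."""
    return x - x.mean()
```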
The following general log-contrast optimization model enables the joint estimation of the regression vector b = (β_k)_{1≤k≤p} ∈ R^p and of the scale vector s = (σ_i)_{1≤i≤N} ∈ R^N in Model 1 within a convex optimization setting.
Problem 1 Consider the setting of Model 1. Let N and M be strictly positive integers, let D be a vector subspace of R^N, and let (n_i)_{1≤i≤N} be strictly positive integers such that Σ_{i=1}^N n_i = n; partition y and X conformingly into blocks (y_i)_{1≤i≤N} and (X_i)_{1≤i≤N}, where y_i ∈ R^{n_i} and X_i ∈ R^{n_i×p}. Finally, set E = {b ∈ R^p | C^⊤ b = 0} and, for every i ∈ {1, …, M}, let ψ_i ∈ Γ_0(R^{m_i}) and L_i ∈ R^{m_i×p}. The problem is to

minimize_{s ∈ D, b ∈ E}  Σ_{i=1}^N ϕ̃_i(σ_i, y_i − X_i b) + Σ_{i=1}^M ψ_i(L_i b),  (13)

where ϕ̃_i denotes the perspective of a loss function ϕ_i ∈ Γ_0(R^{n_i}).

Remark 1 Problem 1 comprises four main components, which are associated with different aspects of the general log-contrast regression model.
- The perspective functions (ϕ̃_i)_{1≤i≤N} play the role of the loss function in statistical estimation and couple the estimation of the scale vector s and the regression vector b. Because the functions (ϕ_i)_{1≤i≤N} are convex, the overall minimization problem (13) remains a convex one in (s, b).
- Problem 1 allows for the partitioning of the design matrix X and the response y into N blocks with individual scale parameters (σ_i)_{1≤i≤N}. This is beneficial when data from multiple measurement sources are available for the prediction of the response or when heteroscedasticity in the design matrix is expected for different groups of measurements. Introducing multiple scales also has numerical advantages. Indeed, as discussed in [9], certain proximity operators of perspective functions are easier to compute in separable form.
- The vector subspaces D and E (see (12)) enforce linear constraints on the scale vector s = (σ_i)_{1≤i≤N} and the regression vector b, respectively.
- Additional properties of the regression vector, such as (structured) sparsity, are promoted through the use of the penalization functions (ψ_i)_{1≤i≤M} and the matrices (L_i)_{1≤i≤M}. The penalization functions typically contain a free parameter λ, the setting of which requires a model selection strategy.

Algorithm
Our algorithmic solution method for Problem 1 is patterned after that of [9], which relies on an application of the Douglas-Rachford splitting algorithm in a higher-dimensional space. To present the algorithm and its convergence properties, it is convenient to introduce suitable matrices, together with an auxiliary function, and to define, for every iteration index k ∈ N, the corresponding vectors of iterates.
Proposition 1 Consider the setting of Problem 1 and suppose that conditions (26) and (27) hold. Then Problem 1 has at least one solution, and the sequences (s^k)_{k∈N} and (b^k)_{k∈N} generated by Algorithm 2 converge to a solution of Problem 1.
Proof See Appendix A.
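To convey the flavor of the iteration without reproducing Algorithm 2, here is a generic Douglas-Rachford loop on a toy problem whose minimizer is known in closed form; all choices (the functions f and g, γ = µ = 1, the iteration count) are ours and differ from the parameters used later in the paper.

```python
import numpy as np

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Toy target: minimize f(x) + g(x) with f = 0.5*||x - a||^2 and
# g = lam*||x||_1; the minimizer is the soft thresholding of a.
a = np.array([3.0, -0.2, 1.5])
lam, gamma, mu = 1.0, 1.0, 1.0
y = np.zeros_like(a)
for _ in range(200):
    x = (y + gamma * a) / (1 + gamma)    # prox of gamma*f at y
    z = prox_l1(2 * x - y, gamma * lam)  # prox of gamma*g at the reflection
    y = y + mu * (z - x)                 # relaxed update, mu in ]0, 2[
# x converges to prox_l1(a, lam) = [2.0, 0.0, 0.5]
```

Each iteration only evaluates the two proximity operators separately, which is exactly what makes the splitting attractive for nonsmooth constrained problems such as (13).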
Conditions (26) and (27) are satisfied in most practical situations. For example, the following describes a scenario that will be encountered in Section 4.
Proposition 2 Consider the setting of Problem 1 and suppose that the following additional properties hold. Then (26) and (27) are satisfied.
Proof See Appendix B.

Model selection
In the context of log-contrast regression, a number of different model selection strategies have been proposed, including stability selection [16,17] and generalized information criteria [27]. In [25], a scale-dependent tuning parameter has been derived, where the optimal scale is found via line search.
Our joint scale and regression modeling approach makes this line search obsolete, thus yielding a parameter-free model selection scheme. More specifically, we consider two model selection schemes. Firstly, following [25], we consider

λ_0 = √2 q_n(r/p),  where  q_n(t) = n^{−1/2} Φ^{−1}(1 − t),  (28)

Φ^{−1} is the quantile function of the standard normal distribution, and r is the solution to the equation r = q_1^4(r/p) + 2 q_1^2(r/p). In practice, this data-independent model selection scheme may lead to the inclusion of spurious coefficients. To assess the robustness of the inferred solutions, we combine this theoretically derived regularization with stability selection [17]. The original stability selection approach selects, for every subsample, a small set of predictors from the regularization path, e.g., the first q predictors that appear along the path or the q coefficients that are largest in absolute value across the entire path. We here propose to select, for every subsample, the nonzero coefficients present at regularization parameter λ_0. Note that λ_0 is sample-size dependent and hence needs to be adapted to the specific subsample size used in stability selection. As default values, we consider a subsample size of n/2 and generate 100 subsamples. The key diagnostic in stability selection is the selection frequency profile of each coefficient. To select a stable set of coefficients, a threshold parameter t_s ∈ [0.6, 0.9] is recommended [17]; all coefficients with selection frequency above t_s are included in the final model.
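The subsampling scheme can be sketched as follows; this is a generic sketch in which `fit_support` stands for whatever base estimator is used (e.g., the nonzero pattern of the sparse log-contrast fit at λ_0 adapted to the subsample size), and all names and defaults are ours.

```python
import numpy as np

def stability_selection(y, X, fit_support, n_sub=100, t_s=0.7, seed=0):
    """Refit on random half-samples and record, for each coefficient,
    the fraction of subsamples in which it is selected."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    freq = np.zeros(p)
    for _ in range(n_sub):
        idx = rng.choice(n, size=n // 2, replace=False)
        freq += fit_support(y[idx], X[idx])  # boolean support vector
    freq /= n_sub
    return freq, freq >= t_s                 # frequencies and stable set
```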

Applications to compositional microbiome data
We apply several instances of the general log-contrast model of Problem 1 in the context of microbiome data analysis tasks. We set M = 1, m_1 = m, L_1 = Id, and employ as regularization function the ℓ1 norm ψ_1 = λ‖·‖_1. We use the functions in Examples 1 and 2 as instances of the perspective loss function ϕ̃_1, and refer to the resulting instances as the least-squares (LS) and the Huber log-contrast model, respectively. Thus, in the case of the LS model, (13) is minimized over σ ∈ R and b ∈ E with the perspective of the squared loss as data fitting term, while in the case of the Huber model the perspective of the Huber function is used. Note that the projection of a vector s ∈ R^n onto D, as required in Algorithm 2, admits a simple closed-form expression. Depending on the application, we use different zero-sum constraints on b, as specified by the matrix C. To solve the various instances of Problem 1, we use Algorithm 2 and set the parameters µ_k = 1.9 and γ = 1. We consider that the algorithm has converged when ‖b^k − b^{k+1}‖_2 < 10^{−6}.

Body mass index prediction from gut microbiome data
We first consider a cross-sectional study that examines the relationship between diet and gut microbiome composition, where additional demographic covariates, including body mass index (BMI), are available; this is referred to as the COMBO data set [29]. After pre-processing and filtering, the data set comprises the log-transformed relative abundances of p = 87 taxa at the genus level across n = 96 healthy subjects. Following previous analyses [16,25], we investigate the relationship between BMI and the microbial compositions in a log-contrast regression framework. We use C = 1_p to model the standard zero-sum constraint. In addition to the compositional covariates, two covariate measurements, fat and calorie intake, are also taken into account via joint unpenalized least squares. We investigate the influence of the different loss functions, LS and Huber, as well as of sub-compositional constraints on the quality of the estimation, the size of the support set, and the predictive power.
To highlight the ability of the algorithm to jointly estimate regression and scale, we solve the two problems over the regularization path with 40 λ values on a log-linear grid in [0.0069, 0.6989]. We also consider the theoretically derived regularization parameter λ_0 = 0.1997 from (28). Figures 3a and 3b show the solution paths of the regression vector b for the sparse LS log-contrast model and the Huber model, respectively. Figure 3c displays the corresponding joint scale estimates σ for the LS and the Huber model. The estimated regression coefficients at λ_0 are highlighted in Figure 3d. Both models agree on a set of six genera, including Clostridium as the strongest negative and Acidaminococcus as the strongest positive predictor. This implies that the log-ratio of Acidaminococcus to Clostridium is positively associated with BMI. Other genera include Alistipes, Megamonas, and Coprobacillus with negative coefficients, and Dorea with a positive coefficient. In [16,25], the genera Alistipes, Clostridium, Acidaminococcus, and Allisonella have been identified as key predictors. The solutions of the perspective log-contrast models corroborate these findings for Clostridium and Acidaminococcus, and to a lesser extent for Alistipes, whereas the genus Allisonella has only a small strictly positive coefficient in both log-contrast models (Figure 3d).
Next, we consider the stability selection scheme introduced in Section 3.3 with default parameters and threshold t_s = 0.7. Figure 4a shows the stability-based selection frequency profiles for the sparse LS and Huber log-contrast models. Next, we investigate the influence of sub-compositional constraints on the stability selection frequency for the two estimation procedures. We follow the analysis of [25] and consider a subset of 45 genera that have the highest relative abundances in the data set. These 45 genera belong to K = 4 distinct phyla: Actinobacteria (two genera), Bacteroidetes (eight genera), Firmicutes (32 genera), and Proteobacteria (three genera). The constraint matrix C is hence an orthogonal (45 × 4)-dimensional binary matrix. Figures 5a and 5b show the selection frequency profiles with and without sub-compositional constraints, respectively. Figure 5c shows the difference in the selection frequency profiles. Although several genera, including Collinsella, Paraprevotella, Parabacteroides, Faecalibacterium, Oscillibacter, and Parasutterella, display significant frequency differences, the two genera Clostridium and Acidaminococcus, both belonging to the Firmicutes phylum, again demonstrate the highest stability both with and without sub-compositional constraints.

Relationship between soil microbiome and pH concentration
We next consider a data set put forward in [15], comprising n = 88 soil samples from North and South America. Both amplicon sequencing data and environmental covariates, including pH concentrations, are available and have been re-analyzed via a log-ratio approach in [19]. The amplicon data contain p = 116 OTUs, and we consider C = 1_p. We perform stability selection with default parameters as outlined in Section 3.3. The selection frequency of the different regression coefficients is shown in Figure 6a. At stability threshold t_s = 0.7, seven taxa are selected in both models: four Acidobacteria, two Proteobacteria, and one Actinobacteria. After re-estimation of the two perspective log-contrast models on the selected subset, the two taxa of order Ellin6513 and one taxon of family Koribacteraceae have negative coefficients, whereas two taxa belonging to the genus Balneimonas as well as one Rubrobacter taxon and one taxon of order RB41 have positive associations (Figure 6b). In addition, the Huber model identifies several data points to be outliers in the final estimate (Figure 6c). For completeness, we include the mean absolute deviation (MAD) between the model estimates and the data in Figure 6d. The selected taxa cover a wide range of average pH levels (as provided in [19]), ranging from 4.9 to 6.75, implying that the learned model may indeed generalize to other soil types not present in the current data set.

Discussion and conclusion
Finding linear relationships between continuous variables and compositional predictors is a common task in many areas of science. We have proposed a general estimation model for high-dimensional log-contrast regression which includes many previous proposals as special cases [16,18,25,28]. Our model belongs to the class of perspective M-estimation models, which allows for scale estimation in the data fitting term while preserving the overall convexity of the underlying model. This is made possible by recent advances in the theory of perspective functions [7,8,9]. We have introduced a Douglas-Rachford algorithm that can solve the corresponding constrained nonsmooth convex optimization problem with rigorous guarantees on the convergence of the iterates. We have illustrated the viability of our approach on two microbiome data analysis tasks: body mass index (BMI) prediction from gut microbiome data in the COMBO study and pH prediction from soil microbial abundance data. Our joint regression and scale estimation enabled the use of a universal single tuning parameter λ_0 [25] to control the sparsity of the estimates. We have combined this approach with stability-based model selection [17] to determine sparse stable sets of log-contrast predictors. For the gut microbiome BMI analysis, the robust Huber log-contrast model identified two genera that can predict BMI well for normal to overweight participants while simultaneously identifying outliers with respect to the log-contrast model. In the soil microbiome data set, we derived a parsimonious pH prediction model which requires only four log-ratios of microbial compositions with an overall R² = 0.88.
Going forward, we believe that the general log-contrast model and the associated optimization and model selection techniques presented here will provide a valuable off-the-shelf tool for log-contrast regression analysis when compositional data, such as microbial relative abundance data, are used as predictors in exploratory data analysis.

Appendix A

Since (13) is equivalent to minimize_{w ∈ R^{N+p}} f(w) + g(Lw), we infer from (38) that Problem 1 admits at least one solution. Note that (39) can be rewritten as (46).
Problem 1 has at least one solution. Now let (s^k)_{k∈N} and (b^k)_{k∈N} be sequences generated by Algorithm 2. Then (s^k)_{k∈N} converges to some s̄ ∈ R^N and (b^k)_{k∈N} converges to some b̄ ∈ R^p such that (s̄, b̄) solves Problem 1.
S 1 f H w o 5 t S G N o 0 E r K 1 U A r O 6 1 z i a Z w h 6 P C g 5 Z U / a f x 2 X l c u u J D b k 3 5 N i 6 v y t c h o n + S W o l / 4 S 4 3 T 5 v m a c W 5 a k 8 0 = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " g O 9 C U k 3 J t 8 P R 3 j D e n E 5 U R M / x H s M = " > A A A B 9 X i c Z Z B N S w J R F I b P t S + z L 6 t l G 8 m N g c h M B G 2 l g l p E W P g F K X L n e r T L 3 I 9 h 7 p 1 U p B / R t m g X b f s 9 / p v G m h Y 6 7 + r l e c 8 5 8 B 4 v E N x Y x 5 m R z M r q 2 v p G d j O 3 t b 2 z u 5 f f P 2 g a H Y U M G 0 w L H b Y 9 a l B w h Q 3 L r c B 2 E C K V n s C W 5 1 / O 8 9 Y z h o Z r V b e T A L u S D h U f c E Z t j F o d D y 3 t 8 V 6 + 6 F S c X x X S x k 1 M E R L V e v l Z p 6 9 Z J F F Z J q g x j 6 4 T 2 O 6 U h p Y z g S + 5 T m Q w o M y n Q 5 x S a S S 1 T y l o J t J b h P M x E y B b p O N I c a b 7 S 1 f H w o 5 t S G N o 0 E r K 1 U A r O 6 1 z i a Z w h 6 P C g 5 Z U / a f x 2 X l c u u J D b k 3 5 N i 6 v y t c h o n + S W o l / 4 S 4 3 T 5 v m a c W 5 a k 8 0 = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " g O 9 C U k 3 J t 8 P R 3 j D e n E 5 U R M / x H s M = " > A A A B 9 X i c Z Z B N S w J R F I b P t S + z L 6 t l G 8 m N g c h M B G 2 l g l p E W P g F K X L n e r T L 3 I 9 h 7 p 1 U p B / R t m g X b f s 9 / p v G m h Y 6 7 + r l e c 8 5 8 B 4 v E N x Y x 5 m R z M r q 2 v p G d j O 3 t b 2 z u 5 f f P 2 g a H Y U M G 0 w L H b Y 9 a l B w h Q 3 L r c B 2 E C K V n s C W 5 1 / O 8 9 Y z h o Z r V b e T A L u S D h U f c E Z t j F o d D y 3 t 8 V 6 + 6 F S c X x X S x k 1 M E R L V e v l Z p 6 9 Z J F F Z J q g x j 6 4 T 2 O 6 U h p Y z g S + 5 T m Q w o M y n Q 5 x S a S S 1 T y l o J t J b h P M x E y B b p O N I c a b 7 S 1 f H w o 5 t S G N o 0 E r K 1 U A r O 6 1 z i a Z w h 6 P C g 5 Z U / a f x 2 X l c u u J D b k 3 5 N i 6 v y t c h o n + S W o l / 4 S 4 3 T 5 v m a c W 5 a k 8 0 = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " g O 9 C U k 3 J t 8 P R 3 j D e 
n E 5 U R M / x H s M = " > A A A B 9 X i c Z Z B N S w J R F I b P t S + z L 6 t l G 8 m N g c h M B G 2 l g l p E W P g F K X L n e r T L 3 I 9 h 7 p 1 U p B / R t m g X b f s 9 / p v G m h Y 6 7 + r l e c 8 5 8 B 4 v E N x Y x 5 m R z M r q 2 v p G d j O 3 t b 2 z u 5 f f P 2 g a H Y U M G 0 w L H b Y 9 a l B w h Q 3 L r c B 2 E C K V n s C W 5 1 / O 8 9 Y z h o Z r V b e T A L u S D h U f c E Z t j F o d D y 3 t 8 V 6 + 6 F S c X x X S x k 1 M E R L V e v l Z p 6 9 Z J F F Z J q g x j 6 4 T 2 O 6 U h p Y z g S + 5 T m Q w o M y n Q 5 x S a S S 1 T y l o J t J b h P M x E y B b p O N I c a b 7 S 1 f H w o 5 t S G N o 0 E r K 1 U A r O 6 1 z i a Z w h 6 P C g 5 Z U / a f x 2 X l c u u J D b k 3 5 N i 6 v y t c h o n + S W o l / 4 S 4 3 T 5 v m a c W

Fig. 3 a. Solution path of the regression vector β in the sparse Least Squares (LS) log-contrast model on the full COMBO data. The grey line marks the theoretical λ0 from (28). b. Solution path of the regression vector β in the sparse Huber log-contrast model on the full COMBO data. c. Solution paths of the scale estimates σ for both log-contrast models on the full COMBO data. d. Comparison of the regression estimates of both models at regularization parameter λ0 on the full data set. Both models agree on the two strongest predictors, the genera Clostridium and Acidaminococcus.
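The difference between the two solution paths above comes down to the loss function: the LS model penalizes residuals quadratically, while the Huber model switches to a linear penalty beyond a tuning constant. As a minimal sketch (not the paper's implementation; the constant `delta = 1.345` is an assumed common default, not taken from the text):

```python
import numpy as np

def huber_loss(r, delta=1.345):
    """Huber loss: quadratic for small residuals, linear for large ones.
    This down-weights outliers relative to the Least Squares loss."""
    r = np.asarray(r, dtype=float)
    small = np.abs(r) <= delta
    out = np.empty_like(r)
    out[small] = 0.5 * r[small] ** 2                      # LS regime
    out[~small] = delta * (np.abs(r[~small]) - 0.5 * delta)  # linear regime
    return out

# A residual of 3 is penalized far less than under the quadratic loss (0.5 * 9 = 4.5)
print(huber_loss([-3.0, -0.5, 0.0, 0.5, 3.0]))
```

For residuals inside the threshold the two models coincide, which is why their solution paths agree except where outlying samples pull the LS estimates.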

Fig. 4 a. Stability selection profile for all taxa selected with a frequency > 0.1 in either the sparse Least Squares (LS, blue) or the Huber (red) log-contrast model. The green dashed line marks the stability threshold ts = 0.7, selecting the genera Clostridium and Acidaminococcus. b. Prediction of BMI from the log-contrast of the two genera in the Huber log-contrast model for inliers (blue) and outliers (red) (overall R² = 0.19).
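The stability selection profiles above are built by refitting the sparse model on many random subsamples and recording how often each taxon enters the support. A schematic sketch of this procedure, with a hypothetical `toy_select` callback standing in for the actual sparse log-contrast fit (subsample count and fraction are assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def selection_frequencies(X, y, fit_select, n_subsamples=100, frac=0.5):
    """Stability selection skeleton: refit a sparse selector on random
    subsamples and return per-variable selection frequencies."""
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        counts += fit_select(X[idx], y[idx])
    return counts / n_subsamples

def toy_select(X, y, thresh=0.5):
    """Hypothetical stand-in selector: flag variables whose marginal
    correlation with the response exceeds a threshold."""
    yc = y - y.mean()
    corr = np.abs(X.T @ yc) / (np.linalg.norm(X, axis=0) * np.linalg.norm(yc) + 1e-12)
    return corr > thresh

n, p = 200, 10
X = rng.standard_normal((n, p))
y = 2 * X[:, 0] + 0.1 * rng.standard_normal(n)

freqs = selection_frequencies(X, y, toy_select)
stable = np.flatnonzero(freqs >= 0.7)  # ts = 0.7, as in the figure
```

Only variables whose selection frequency clears the threshold ts are reported as stable, which is the cutoff the green dashed line marks in the profiles.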

Fig. 5 a. Stability selection profiles for the subset of 45 taxa selected with a frequency > 0.1 in either the sparse Least Squares (LS, blue) or the Huber (red) log-contrast model with subcompositional constraints. b. Same as a., but without subcompositional constraints. c. Stability selection frequency differences between the two approaches. Several genera show significant differences. The colors indicate the phyla that the genera belong to and the non-compositional covariates fat and diet intake.

Fig. 6 a. Stability selection profile for all taxa selected with a frequency > 0.1 in either the sparse Least Squares (LS, blue) log-contrast model or the Huber model (red). The green dashed line marks the stability selection threshold ts = 0.7. b. Values of the seven selected log-contrast regression coefficients for the LS (blue) and the Huber (red) model. c. Prediction of pH measurements from the Huber model for inliers (blue) and outliers (red) (R² = 0.88). d. Table summarizing the mean absolute deviation (MAD) of the two model estimates on the data. Numbers in parentheses give the number of inlier and outlier data points under the Huber outlier model.
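The inlier/outlier split and the MAD figures reported in panel d can be sketched as follows. This is an assumed convention, not the paper's code: a point is flagged as an outlier when its scaled residual falls in the linear regime of the Huber loss, i.e. exceeds the tuning constant (here `delta = 1.345`, an assumed default):

```python
import numpy as np

def huber_outliers(residuals, sigma, delta=1.345):
    """Flag points whose scaled residual exceeds delta, i.e. those that
    land in the linear (outlier) regime of the Huber loss."""
    return np.abs(np.asarray(residuals) / sigma) > delta

def mean_abs_dev(residuals):
    """Mean absolute deviation of the residuals, as summarized in panel d."""
    return np.mean(np.abs(residuals))

r = np.array([0.1, -0.2, 5.0, 0.05])
mask = huber_outliers(r, sigma=1.0)   # only the residual of 5.0 is flagged
mad = mean_abs_dev(r)
```

Counting `mask` then gives the inlier/outlier split reported in parentheses, and the MAD can be computed separately over inliers and outliers to compare the two model fits.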