A New Universal Resample-Stable Bootstrap-Based Stopping Criterion for PLS Components Construction

We develop a new robust stopping criterion for component construction in Partial Least Squares Regression (PLSR), characterised by a high level of stability. This criterion is universal in the sense that it is suitable both for PLSR and for its extension to Generalized Linear Regression (PLSGLR). It is based on a non-parametric bootstrap process and has to be computed algorithmically. It allows testing each successive component at a preset significance level alpha. To assess its performance and robustness with respect to different noise levels, we perform intensive dataset simulations with a preset and known number of components to extract, both in the case n > p (n being the number of subjects and p the number of original predictors) and for datasets with n < p. We then use t-tests to compare the performance of our approach with that of other classical criteria. The robustness property is tested in particular through resampling processes on a real allelotyping dataset. We conclude that our criterion also has better global predictive performance, in both the PLSR and PLSGLR (Logistic and Poisson) frameworks.


Introduction
Modelling relations through ordinary linear regression between a univariate response y = (y_1, ..., y_n)^T ∈ R^{n×1}, (.)^T denoting the transpose, and highly correlated predictors X = (x_1, ..., x_p) ∈ R^{n×p}, with n the number of subjects and p the number of predictors, or on datasets including more predictors than subjects, is unsuitable or even impossible. However, with the huge advances in technology and computer science, providing consistent analyses of such datasets has become a major challenge, particularly in domains such as medicine, biology or chemistry. To deal with such datasets, statistical methods have been developed, notably PLS Regression (PLSR), which was first introduced by Wold et al. (1983) and Wold et al. (1984) and described precisely by Höskuldsson (1988) and Wold et al. (2001).
PLSR consists in building K ⩽ rk(X) orthogonal "latent" variables T_K = (t_1, ..., t_K), also called components, in such a way that T_K optimally describes the common information space between X and y. These components are built as linear combinations of the original predictor vectors so as to maximise Cov(y, t_k | T_{k−1}):

t_k = X w*_k,

where w*_k = (w*_{1k}, ..., w*_{pk})^T is the vector of predictor weights, depending on y, in the k-th component (Wold et al., 2001).
Let K be the number of components. The final regression model is thus the following one:

y = Σ_{k=1}^{K} c_k t_k + ε = Σ_{j=1}^{p} β^{PLS}_j x_j + ε,  (1.4)

with ε = (ε_1, ..., ε_n)^T the n by 1 error vector, verifying E(ε | T_K) = 0_n and Var(ε | T_K) = σ² × Id_n, and (c_1, ..., c_K) the coefficients of the regression of y on T_K. This model is not a linear one, since the hat matrix depends not only on X but also on y. Thus, in almost all cases, the fitted response

ŷ = Σ_{k=1}^{K} ĉ_k Σ_{j=1}^{p} w*_{jk} x_j  (1.5)

is a nonlinear function of y. An extension to Generalized Linear Regression models, denoted PLSGLR, has been developed by Bastien et al. (2005) with the aim of taking the specific distribution of y into account. In this context, the regression model is:

g(θ) = Σ_{k=1}^{K} c_k t_k,

with θ the conditional expected value of y for a continuous distribution, or the probability vector of a discrete law with finite support. The link function g depends on the distribution of y.
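To make the construction above concrete, here is a minimal pure-Python sketch of the PLS1 component-building loop for a univariate response (an illustrative implementation under simplifying assumptions, not the authors' code): each weight vector is taken proportional to X^T y, which maximises the covariance criterion among unit-norm weights, and X is deflated after each step so that successive components are orthogonal.

```python
import math

def pls1_components(X, y, K):
    """Sketch of PLS1 on centred data: w*_k is proportional to
    X_(k-1)^T y, and X is deflated after each component so that the
    successive t_k are mutually orthogonal."""
    n, p = len(X), len(X[0])
    Xd = [row[:] for row in X]          # deflated working copy of X
    components = []
    for _ in range(K):
        # weights proportional to X^T y, normalised to unit length
        w = [sum(Xd[i][j] * y[i] for i in range(n)) for j in range(p)]
        norm = math.sqrt(sum(wj * wj for wj in w)) or 1.0
        w = [wj / norm for wj in w]
        # component t_k = X w*_k
        t = [sum(Xd[i][j] * w[j] for j in range(p)) for i in range(n)]
        tt = sum(ti * ti for ti in t) or 1.0
        # deflation: remove from each predictor its projection on t_k
        for j in range(p):
            loading = sum(t[i] * Xd[i][j] for i in range(n)) / tt
            for i in range(n):
                Xd[i][j] -= loading * t[i]
        components.append(t)
    return components
```

On centred toy data, the deflation step guarantees that any two extracted components have a zero inner product, which is the orthogonality property used throughout the paper.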
Since K ⩽ rk(X) and Corr(t_{k1}, t_{k2}) = 0 for all (k1, k2) ∈ [[1, K]]² with k1 ≠ k2, the issues raised by the ill-conditioned matrix X are solved, so that ordinary linear regression can be used to estimate the c_k parameters. However, the determination of the optimal number of components K, which is equal to the exact dimension of the link between X and y, is crucial to obtain correct estimates of the original predictor coefficients. Indeed, retaining K1 < K leads to a loss of information, so that the links between some predictors and y are not correctly modelled. Retaining K2 > K means that useless information in X is used to model y, which leads to overfitting.
In this article, we first remind the reader of the most commonly used existing criteria and introduce our adaptation of so-called bootstrapping pairs to PLSR and PLSGLR as a new stopping criterion in part 2. We also describe the global algorithm we developed to compare these criteria through intensive dataset simulations. The aim is to analyse the results obtained with these criteria under increasing noise levels inserted both in X, through an additional component, and in y, by adding a supplementary centred variable. This is indeed a common situation in domains such as chemistry or medicine, especially with allelotyping datasets, and it raises important issues for the extraction of a reliable number of components. In part 3, we analyse the results obtained in the PLSR framework, before analysing PLSGLR results for Logistic Regression (PLS-LR) and Poisson Regression (PLS-PR) in part 4. Results are treated for both the n > p and n < p cases. In part 5, we treat a real allelotyping dataset by comparing the numbers of components extracted by the different applicable criteria. We also assess the robustness of our new bootstrap-based criterion through resampling processes, by approximating the distribution of these extracted numbers of components. Finally, in part 6, we discuss the observed advantages and drawbacks of each criterion.

Existing criteria used for comparison
• In PLSR:
1. Q². This criterion is obtained by Cross-Validation (CV). We check results either with leave-one-out CV, i.e. q = n, where q is the number of parts the dataset is divided into, or with q equal to five (5-CV), following the results obtained by Hastie et al. (2009) and Kohavi (1995). For each new component t_k, the value

Q²_k = 1 − PRESS_k / RSS_{k−1}

is calculated, where PRESS_k is the predictive residual sum of squares of the k-component model and RSS_{k−1} the residual sum of squares of the (k − 1)-component model. Results linked to these two criteria will be graphically reported as Q2lv1o and Q2K5, respectively.
2. BICdof. As mentioned in the introduction, the hat matrix H_K depends not only on X but also on y, so that corrected degrees of freedom (dof) have to be used in the expression of the BIC criterion (Schwarz, 1978). Krämer and Sugiyama (2011) define this dof correction in the PLSR framework (without missing data) and apply it to the BIC criterion. We used the R package plsdof, based on Krämer and Sugiyama's (2011) work, where this criterion is implemented as

BIC = RSS/n + log(n) · γ · σ̂²/n,

where γ is the degree of freedom (dof) of model (1.4) and σ̂² is defined by Krämer and Sugiyama (2011). The selected model is the one realising the first local minimum of this BICdof criterion. We also extract results by retaining the models realising the global minimum and refer to them as BICglob.
• In PLSGLR:
1. AIC. The AIC criterion (Akaike, 1974) can be computed whatever distribution is involved. However, no corrected dof have been determined in this PLSGLR framework.
2. BIC. As for the AIC, the BIC can be computed without corrected dof.
3. CV-MClassed. This criterion can only be used for PLS-LR.
Through 5-CV, it determines, for each model, the number of miss-classified predicted values. The selected model is the one linked to the minimal value of this criterion.
4. p val. Bastien et al. (2005) define a new component t_k as non-significant if it contains no significant predictor. An asymptotic Wald test is used to assess the significance of the different predictors.
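To make the Q² rule concrete, the following pure-Python sketch (an illustrative implementation, not the plsRglm code) computes PRESS_k through q-fold CV for an arbitrary fit/predict pair and the corresponding Q²_k = 1 − PRESS_k/RSS_{k−1}; the `fit` and `predict` arguments stand in for any regression backend.

```python
import random

def q2(press_k, rss_km1):
    """Q²_k = 1 - PRESS_k / RSS_{k-1}: a new component is kept when
    Q²_k exceeds a preset threshold."""
    return 1.0 - press_k / rss_km1

def kfold_press(X, y, q, fit, predict, seed=0):
    """PRESS through q-fold CV: each subject is predicted by a model
    fitted without its fold.  Fold assignment is random, which is the
    source of the instability discussed later in the paper."""
    n = len(y)
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # random fold assignment
    folds = [idx[i::q] for i in range(q)]
    press = 0.0
    for fold in folds:
        train = [i for i in idx if i not in fold]
        model = fit([X[i] for i in train], [y[i] for i in train])
        for i in fold:
            press += (y[i] - predict(model, X[i])) ** 2
    return press
```

For instance, with the trivial mean model as backend, PRESS is exactly zero on a constant response, and Q² equals one when PRESS_k vanishes.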

Bootstrap-based criterion: motivations, assumptions and definitions
All the criteria described above have major flaws, including dependency on arbitrary bounds, results based on asymptotic laws, or results derived from q-CV, which naturally depend on the value of q and on the way the groups are randomly composed. To address this, we adapted a non-parametric bootstrap technique in order to test directly, at some confidence level (1 − α), the significance of the different coefficients c_k, by extracting confidence intervals (CI) for each of them.
By maximising Cov(y, t_k | T_{k−1}), each new component t_k represents the best choice of a new dimensional space direction, extracted from the remaining information, that best explains y − E(y | T_{k−1}). This process implies the following result.
Proposition 2.1. Let y_{(k−1)} and X_{(k−1)} respectively be the response vector and the predictor matrix deflated on k − 1 components. Suppose that x^T_{i,(k−1)} y_{(k−1)} ≠ 0 for some i; then the PLS component-building process implies that, for all k ∈ [[1, K]], ĉ_k > 0 and, conditionally on X, ĉ_k follows a positive distribution.
Bootstrapping pairs (y, X) in order to test the significance of these parameters is not suitable, since it would approximate the conditional distribution given X. Thus, to test each new component, we approximate the conditional distribution of these coefficients given T_k by bootstrapping pairs (y, T_k).
We define a new component as significant if it is significant for both y and X, so that the extracted number of components K is defined as the index of the last component which is significant for both of them.

Adapted bootstrapping pairs as a new stopping criterion
The so-called bootstrapping-pairs technique was introduced by Freedman (1981). It only relies on the assumption that the original pairs (y_i, t_{i•}), where t_{i•} represents the i-th row of T_k, are randomly sampled from some unknown (k + 1)-dimensional distribution. This technique was developed to treat so-called correlation models, in which the predictors are considered as random and may be related to the response. It is thus appropriate to "estimate the regression plane for a certain population on the basis of a simple random sample" (Freedman, 1981, p. 1219). Based on our definition of the significance of a new component, we adapted this method by designing a double bootstrapping-pairs algorithmic implementation. The first step consists in bootstrapping pairs (X, T_k), leading to a maximal number k_max of components which can be extracted. The second step consists in bootstrapping pairs (y, T_k) to test the significance of each successive component t_k, with k ⩽ k_max. To avoid confusion between the number of predictors and the coefficients of the regressions of X on T_k, we set m as the total number of predictors. The double bootstrapping-pairs algorithm proceeds as follows, starting from k = 1 and with l = 1, ..., m:

• Bootstrapping (X, T_k): for each predictor x_l, bootstrap the pairs (x_l, T_k) and build a CI for its coefficient on t_k; k_max is the last index for which at least one of these coefficients is significant.
• Bootstrapping (y, T_k): draw R bootstrap samples of the pairs (y, T_k); for each resampled pair (y, T_k)_br, perform the OLS regression of the resampled y on the resampled T_k and store the resulting ĉ_k; build a CI from these replicates. If ĉ_k is significant and k < k_max, set k ← k + 1 and return to the first step. Else, the final extracted number of components is the last significant k.

Results linked to this bootstrap-based criterion will be graphically reported as BootYT.
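The second bootstrap step above can be sketched in a few lines of pure Python. This is a hedged illustration under simplifying assumptions (a single component, so that OLS reduces to a ratio of sums; percentile CI), not the authors' exact implementation: pairs (y_i, t_i) are resampled with replacement, each resample yields a replicate of ĉ_k, and the component is declared significant at level α when 0 lies outside the percentile CI.

```python
import random

def bootstrap_ci_ck(y, t, R=500, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the coefficient c_k of a single
    component: resample pairs (y_i, t_i) with replacement and, on each
    resample, compute the OLS slope sum(t*y)/sum(t*t)."""
    rng = random.Random(seed)
    n = len(y)
    reps = []
    for _ in range(R):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap pairs
        yb = [y[i] for i in idx]
        tb = [t[i] for i in idx]
        stt = sum(v * v for v in tb)
        if stt > 0:
            reps.append(sum(a * b for a, b in zip(tb, yb)) / stt)
    reps.sort()
    lo = reps[int((alpha / 2) * len(reps))]
    hi = reps[min(len(reps) - 1, int((1 - alpha / 2) * len(reps)))]
    return lo, hi

def component_significant(y, t, **kw):
    """A component is kept when 0 lies outside its bootstrap CI."""
    lo, hi = bootstrap_ci_ck(y, t, **kw)
    return lo > 0 or hi < 0
```

In the multi-component case the replicate would come from a full OLS of the resampled y on the resampled T_k, but the stopping logic is unchanged.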

Illustration of CV issues: first applications on real datasets
As mentioned by Boulesteix (2014), important issues concerning the stability of the q-fold CV procedure for the choice of tuning parameters, here the number of components, have been observed. These issues are directly induced by the value of q and by the random character of this resampling-based procedure when splitting the original dataset into two distinct sets, a training one and a test one. To illustrate the consequences on the tuning parameter, we treated two real datasets. The first dataset was collected on patients carrying a colon adenocarcinoma. It has 104 observations on 33 binary qualitative explanatory variables and one binary response variable representing the cancer stage according to the Astler-Coller (AB vs CD) classification (Astler and Coller, 1954). This binary response led us to perform PLS-Logistic regressions. This dataset, named aze_compl, is available in the R package plsRglm (Bertrand et al., 2014). We performed 100 times the determination of the number of components using the CV-MClassed criterion with q ∈ {3, 5, 10, 15, 30}. We then performed the same process using our new bootstrap-based criterion. Results are reported in Fig. 1.

The results obtained through q-fold CV, with q < n, and displayed in Fig. 1 are typical examples of these issues. Depending on the choice of q and on the way the different folds are split, the extracted number of components can be dramatically different. In addition, obtaining a complete distribution of the number of components is mostly impossible, due to the high number of different possibilities for splitting the original dataset into q groups.

Proposition 2.2. Let n = pq + r, 0 ⩽ r ⩽ q − 1, be the Euclidean division of n by q. Then the number of distinct partitions of the original dataset into r (p + 1)-element subsets and (q − r) p-element subsets for a CV does not depend on the order of their placement, and is equal to:

f(n, q) = n! / [ ((p + 1)!)^r (p!)^{q−r} r! (q − r)! ].

The leave-one-out CV, which is the only complete CV (f(n, n) = 1, i.e. only one possibility for choosing n folds out of n subjects), concludes to one component to extract. However, it suffers from variance issues concerning the bias-variance tradeoff in the estimation of the prediction error (Hastie et al., 2009; Kohavi, 1995). Our new bootstrap-based criterion is more stable on this dataset and allows the user to choose the number of components, in this case three, through a more accurate process.
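The count in Proposition 2.2 is straightforward to compute; the short sketch below (a direct transcription of the formula, with the reconstructed form of f assumed) confirms, for instance, that leave-one-out is the only complete CV.

```python
from math import factorial

def cv_partitions(n, q):
    """Number of distinct ways to split n subjects into q CV folds:
    r folds of size p+1 and q-r folds of size p, with n = p*q + r,
    when the order of the folds does not matter."""
    p, r = divmod(n, q)
    return factorial(n) // (
        factorial(p + 1) ** r * factorial(p) ** (q - r)
        * factorial(r) * factorial(q - r)
    )
```

For example, 4 subjects split into two folds of two can be arranged in 3 distinct ways, while cv_partitions(n, n) is always 1, recovering the leave-one-out case.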
The second example is a benchmark dataset, called "Processionnaire du Pin", which is treated in depth by Tenenhaus (1998). It has 33 observations on 10 explanatory variables and is also available in the R package plsRglm under the name pine. More details on this dataset are available in Tenenhaus (1998). The same process was applied to this second practical case, with usual PLS regressions. We thus compare the Q² criterion obtained through q-fold CV, q ∈ {2, 3, 5, 10, 15}, with our new bootstrap-based criterion. The Q² criterion obtained through leave-one-out CV concludes to one significant component. Other results are displayed in Fig. 2.
In this case, the q-fold CV does not suffer from the stability issues seen just before, since the Q² criterion is much more stable than the minimisation of the number of miss-classified values. However, extracting one component is not recommended: Tenenhaus (1998), after a complete analysis of this dataset, shows that four components is the best choice. This underestimation issue linked to the Q² criterion will be verified and confirmed in part 3.1 by treating results obtained through dataset simulations. Thus, while the Q² criterion underestimates this optimal number of components, our new bootstrap-based criterion concludes to four components in more than 80% of the results.

Simulation plan
To compare these different criteria, dataset simulations were performed by adapting the simul_data_UniYX function, available in the R package plsRglm.
Simulations were performed so as to obtain a three-dimensional common space between X and y, leading to an optimal number of components equal to three. These three components have standard deviations respectively equal to σ1 = 10, σ2 = 8 and σ3 = 6. Simulations were performed in two different cases, in both the PLSR and PLSGLR frameworks. The first one is the n > p situation, with n = 200 and p ∈ Ω200 = [[7, 50]]. The second one is the n < p situation, where n = 20 and p ∈ Ω20 = [[25, 50]]. For each fixed couple (σ4, σ5), which respectively represent the standard deviation of the useless fourth component present in X and the standard deviation of the random noise in y, we simulated 100 datasets with p_l predictors, l = 1, ..., 100, obtained by sampling with replacement in Ω_n. The X matrices are thus built around four orthogonal components. Finally, the number of bootstrap samples was fixed to R = 500.
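The simulation design can be sketched as follows. This is a loose, hedged illustration of the scheme rather than the exact simul_data_UniYX construction: the mixing weights and the use of independent Gaussian latent variables (orthogonal in expectation only) are assumptions made for readability.

```python
import random

def simulate_dataset(n, p, sigma4, sigma5, seed=0):
    """Sketch of the design: X is built around four latent components
    with sd 10, 8, 6 and sigma4 (the fourth is useless for y), and y
    combines the first three components plus centred Gaussian noise of
    sd sigma5.  Mixing weights are illustrative assumptions."""
    rng = random.Random(seed)
    sds = (10.0, 8.0, 6.0, sigma4)
    # four independent latent components (orthogonal in expectation)
    T = [[rng.gauss(0.0, sd) for sd in sds] for _ in range(n)]
    # each predictor is a random mixture of the four components
    W = [[rng.gauss(0.0, 1.0) for _ in range(4)] for _ in range(p)]
    X = [[sum(T[i][k] * W[j][k] for k in range(4)) for j in range(p)]
         for i in range(n)]
    # y only involves the first three components, plus noise of sd sigma5
    y = [sum(T[i][k] for k in range(3)) + rng.gauss(0.0, sigma5)
         for i in range(n)]
    return X, y
```

Fixing the seed makes each simulated dataset reproducible, which mirrors the repeated-simulation protocol used in the comparisons below.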

PLSR results
As mentioned in part 2.3, the optimal number of components linked to these simulated datasets is equal to three. A four-component model will thus be too complex, even if the estimated response does not significantly change, since Corr(y, t_4) ≈ 0. Indeed, this fourth component only improves the representation of the original predictors by modelling useless information in X, but is not helpful for a better estimation of y. Any supplementary component is built from some of the random noise present in X in order to explain the remaining knowledge in y.

PLSR: Case n > p
For this framework, we consider a grid of values for the noise standard deviations σ4 and σ5. We extract row means for each studied criterion (except for BootYT) and report them in Fig. 3 and 4 as functions of σ4 and σ5. Based on the results mapped out in Fig. 3, the BICglob criterion has some defects concerning the stability of its results. This issue is mainly due to the fact that the adapted dof do not necessarily increase with the number of components. As a direct consequence, adding a component sometimes leads to a smaller dof than that of the previous model, which leads this criterion to overestimation issues with huge variability levels. Thus, the BICglob criterion has to be avoided.
Extracting the optimal number of components K as the one linked to the model realising the first local minimum of this adapted criterion allows the comparison to focus on the first models, which are mainly linked to increasing dof. Thus, the issues linked to the BICglob criterion are solved, and the results displayed in Fig. 3 concerning the BICdof criterion match the expected values. The increase of the displayed response surface for the highest values of σ4 will be treated when analysing the entire response dataset. Based on the results displayed in Fig. 4, the influence of the value of q on this Q² criterion is negligible, so that we only discuss the results linked to the Q2K5 criterion, as recommended by Hastie et al. (2009). This criterion is very sensitive to the increasing noise level in y, so that it globally underestimates the number of components. Tenenhaus (1998) mentioned this underestimation issue while analysing the dataset introduced in part 2.2.3. The fact that this criterion performs an expected and optimal estimation of the number of components only for σ5 ≈ 0 is all the more harmful for this benchmark criterion: real datasets, especially in the target areas mentioned in the introduction, are never noiseless.
We then display three graphical representations of all the 2255 available row means linked to the Q2K5, BICdof and BootYT criteria in Fig. 5. The row variances were also extracted and graphically reported in Fig. 6. Considering the extracted number of components as a discriminant factor, we conclude that the Q² criterion is the least efficient criterion, being the most sensitive to the increasing value of σ5, so that it globally underestimates the number of components. Comparing BICdof and BootYT, or recommending one of them, is quite difficult in this large-n case. BICdof has a low computational runtime and is the least sensitive to the increasing value of σ5, retaining 86.37% of results equal to three or four components. However, referring to Fig. 6, the variability of the results linked to BICdof is globally higher than that of our new bootstrap-based criterion, especially on datasets with large values of σ4 (σ4 > sqrt(Σ_{i=1}^{3} σ_i²) = √200 ≈ 14.14). In this particular case, our new bootstrap-based criterion keeps its stability, while the median of the BICdof results, for instance, is more than tripled (0.25 to 0.79) compared to the one linked to all the data. Moreover, BootYT is more robust than BICdof to the increasing noise level in X in terms of the extracted number of components. Referring to Fig. 7, BICdof suffers from overestimation issues. Moreover, based on the results displayed in Fig. 8, it returns results linked to out-of-range values of variance compared to the two other criteria. These two issues, which are mainly due to the extraction of 1678 (5.847%) results equal to 19 components, added to its lack of robustness to the increasing noise level σ5, lead us to conclude that this criterion should be avoided in this n < p framework.
Our new bootstrap-based criterion underestimates the number of components but shows an attractive robustness to the increasing noise level in y, returning results between 1.2 and 2.2 on average. Moreover, it returns results with low variability for a fixed couple (σ4, σ5), contrary to the BICdof criterion. The Q2K5 criterion has a comparably attractive stability but is less robust to the noise level in y than our new bootstrap-based criterion, linking the Q2K5 to globally more severe underestimation issues.
So, by considering the number of extracted components as a discriminant factor, we conclude that the BootYT criterion is the best one to deal with these n < p datasets.

MSE analysis
We then assess the predictive performance of each of these three criteria (BootYT, BICdof and Q2K5). For each of the 28 700 simulated datasets in this case, we simulated 80 more observations as test points. We computed both training and testing Normalized Mean Square Errors (NMSE). The normalisation was done by dividing the training or testing MSE of the obtained model by the corresponding MSE of the trivial model (the constant model equal to the mean of the training data). Furthermore, as mentioned by Krämer and Sugiyama (2011, p. 702), "the large test sample size ensures a reliable estimation of the test error." We treat these predictive results for each couple of values (σ4, σ5) by testing the equality of NMSE means through asymptotic t-tests with the Welch-Satterthwaite dof approximation (Welch, 1947). All these tests were performed at level α = 0.05. The results of these t-tests are graphically reported in Fig. 9.
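The two quantities used in this comparison can be written down compactly. The sketch below (stdlib Python, an illustrative transcription of the definitions above rather than the authors' code) computes the NMSE normalisation and the Welch t statistic with its Welch-Satterthwaite dof.

```python
from statistics import mean, variance

def nmse(y_true, y_pred, y_train_mean):
    """Normalised MSE: MSE of the model divided by the MSE of the
    trivial model that always predicts the training mean."""
    mse = mean((a - b) ** 2 for a, b in zip(y_true, y_pred))
    mse_trivial = mean((a - y_train_mean) ** 2 for a in y_true)
    return mse / mse_trivial

def welch_t(sample1, sample2):
    """Welch t statistic and Welch-Satterthwaite dof for testing the
    equality of two NMSE means without assuming equal variances."""
    n1, n2 = len(sample1), len(sample2)
    v1, v2 = variance(sample1) / n1, variance(sample2) / n2
    t = (mean(sample1) - mean(sample2)) / (v1 + v2) ** 0.5
    dof = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, dof
```

An NMSE below 1 means the model beats the trivial constant predictor, which is the reading used later when some criteria produce models worse than the trivial one.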
Concerning BootYT vs Q2K5, Q2K5 has a better predictive ability for some very low values of σ5. This result is not surprising since, in this case, the Q2K5 criterion returns numbers of components closer to three than BootYT does (Fig. 7). However, the tests between the BICdof and Q2K5 criteria do not conclude to a significantly better predictive performance of Q2K5 for small values of σ5, even though BICdof globally overestimates the number of components in this case (Fig. 7). In fact, due to the small values of σ5, and despite the poor estimates of the number of components returned by the BICdof criterion (extracting more than three or four components), the testing NMSE react in the same way as the training ones, i.e. the higher the extracted number of components, the lower the predictive NMSE. This leads us to focus only on the extracted number of components when σ5 ≈ 0, making the Q2K5 criterion the best one there.
Finally, in all other cases, the BootYT criterion returns models with at least comparable or better predictive performances than the two others.

PLSR: Conclusion
In the n > p case, the BootYT criterion offers a better robustness to noise in y than Q2K5. It is also more robust to the increasing noise level in X than BICdof, which moreover has some variance issues for high values of σ4. We thus regard the BootYT criterion as a good compromise between the two other criteria, combining their advantages without their drawbacks. Concerning the n < p case, our bootstrap-based criterion is less sensitive than the others to the increasing noise level in y and is linked to low-variance results, leading it to have better predictive performances on datasets with a non-negligible noise level in y.

PLSGLR results
In this section, we present results comparing our bootstrap-based criterion with 4 other criteria (AIC, BIC, CV-MClassed and p val, see part 2.1). Note that, in this framework, due to the specific distribution of y and the link function g we chose, an increase of σ5 does not lead to a linear increase of the noise level in y, as it did in the dataset simulations for the PLSR analysis.

PLS-LR results
In this case, the increasing value of σ5 does not influence the extracted number of components as much as it did in the PLSR analysis. Indeed, due to the link function used, inv.logit : R → ]0, 1[, the increasing value of σ5 does not imply an increase of the response vector variance, which belongs to [0, 0.25], so that no decrease of the number of significant components is implied.
Finally, the bijectivity of the inv.logit function ensures the presence of three common components between X and y.
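The boundedness argument above is easy to verify numerically: whatever the linear predictor (and hence whatever σ5), the Bernoulli variance p(1 − p) obtained through the inverse logit stays in [0, 0.25]. A minimal stdlib sketch:

```python
import math

def inv_logit(x):
    """Inverse logit link: maps the linear predictor from R to ]0, 1[."""
    return 1.0 / (1.0 + math.exp(-x))

# Bernoulli variance p*(1-p) over a wide range of linear predictors:
# it is maximal at p = 0.5 (x = 0) and never exceeds 0.25, which is
# why inflating sigma5 does not inflate the response variance here.
variances = [inv_logit(x) * (1.0 - inv_logit(x))
             for x in [-20.0 + 0.1 * i for i in range(401)]]
```

This is why, contrary to the PLSR simulations, increasing σ5 cannot push the response variance beyond 0.25 in the PLS-LR framework.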

PLS-LR: Case n > p
In this case, we consider σ4 ∈ {0.01, 0.51, ..., 9.51} and σ5 ∈ {0.01, 0.51, ..., 15.51}, leading to 640 different couples (σ4, σ5). Results are thus stored in five tables of dimension 640 × 100. We graphically report the CV-MClassed, p val and BootYT row means as functions of σ4 and σ5, as well as boxplots of their row variances, in Fig. 10. Based on these graphics, the CV-MClassed criterion estimates the optimal number of components well on average. However, this good property must be tempered by the high variances of its results, which mean this criterion should be used with caution. The BootYT and p val criteria return similar results in this n > p case. Both of them slightly underestimate the optimal number of components, but with the advantage of low variances of their results.
We also examined the results obtained with the non-corrected AIC and BIC criteria and display them in Fig. 11. The non-corrected dof lead the AIC and BIC criteria to globally overestimate the number of components. Thus, these criteria have to be avoided until a dof correction is developed in this PLSGLR framework, and they will not be considered in the n < p case. Note that we only compared AIC and BIC values linked to the k-component models with k ⩽ 7 and retained the one realising the minimum of the studied criterion.

PLS-LR: Case n < p
In this case, we consider σ4 ∈ {0.01, 0.51, ..., 9.51} and σ5 ∈ {0.01, 0.51, ..., 9.51}, leading to 400 different couples (σ4, σ5). Results are thus stored in three tables of dimension 400 × 100. We set the maximal value of σ5 to 9.51, and not to 15.51 as in the n > p case, in order to save computational runtime, since an increasing value of σ5 does not affect the choice of the number of extracted components.
We graphically report row means as a function of σ 4 and σ 5 as well as boxplots of row variances in Fig. 12.
The CV-MClassed criterion maintains the same property of estimating well on average, and the same variability issue, as in the n > p framework. Concerning the two other criteria, we observed a stronger underestimation issue linked to them.

PLS-LR: MSE and miss-classed values analysis
In order to test their predictive performances, we simulated 80 more observations for each of the 40 000 simulated datasets and computed the testing NMSE linked to each model established by the three criteria. Furthermore, since the binary response predicted by the model is equal to 1 if the estimated response is over 0.5 and 0 otherwise, returning a higher NMSE does not necessarily lead to a higher number of miss-classified values. Thus, we also computed the number of predicted miss-classified values (MClassed) for each of these three criteria.
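The thresholding rule just described is simple, and it explains why NMSE and misclassification counts can disagree: only probability estimates that cross 0.5 change the predicted class. A minimal sketch of the count:

```python
def misclassified(y_true, p_hat, threshold=0.5):
    """Predicted class is 1 when the estimated probability exceeds the
    threshold; returns how many predictions disagree with the truth."""
    preds = [1 if p > threshold else 0 for p in p_hat]
    return sum(int(yp != yt) for yp, yt in zip(preds, y_true))
```

For instance, a model whose probabilities drift from 0.9 to 0.6 for a positive subject has a worse NMSE but the same misclassification count, since the prediction never crosses the threshold.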
In order to compare their predictive performances, t-tests were computed for each fixed value of (σ4, σ5), so that we can observe precisely which criterion performs better depending on the noise levels in X and y. The results of these tests are graphically reported in Fig. 13.
The bootstrap-based criterion is never less efficient than the other criteria. While there are globally no significant differences between bootstrapping pairs and the p val criterion concerning the predictive NMSE, BootYT is better than this criterion concerning the predictive number of miss-classified values. There are only a few cases where bootstrapping pairs is significantly better than the CV-MClassed criterion concerning the predictive number of miss-classified values. But, concerning the predictive NMSE, the BootYT criterion is better than the latter, returning significantly smaller NMSE values, especially for high values of σ5.
The bootstrap-based criterion is thus also the best one here, having at least similar predictive performances compared to the two others.

PLS-LR: Conclusion
Through these simulations, we can reasonably conclude that the bootstrap-based criterion is globally more efficient than the other ones. In the n > p case, it offers a stability similar to that of the p val criterion. However, it globally underestimates the optimal number of components, whereas the CV-MClassed criterion retrieves it on average but with high variability. Concerning the n < p case, the BootYT criterion has better predictive performances than the two other studied criteria in terms of predictive NMSE and predicted miss-classified values. It also keeps a quite low variability, which is really important for a future routine implementation. Finally, the AIC and BIC are clearly not adapted, since corrected dof have not been established yet.

PLS-PR: Row means analysis
Results were stored in four tables of dimension 440 × 100 in the n > p case and 360 × 100 in the n < p framework. In both cases, each of the first 240 rows corresponds to results for a fixed couple of values (σ4, σ5), with σ4 ∈ {0.01, 0.51, ..., 9.51} and σ5 ∈ {0.01, 0.21, ..., 2.21}. They thus correspond to results linked to simulations with a noise parameter σ5 globally lower than the standard deviation of the available information, which is approximately equal to 1.727.
The AIC, BIC, p val and BootYT row means are displayed as functions of σ4 and σ5 in Fig. 14. Except for the bootstrap-based criterion, all the criteria return an increasing number of components as σ5 increases. These results lead us to conclude that our new bootstrap-based stopping criterion is the only one adapted to a Poisson distribution. Based on these graphical representations, no additional analysis of these numbers of components was done. Moreover, we decided to only compare the predictive performances of the p val and BootYT criteria.

PLS-PR: MSE analysis
We extracted the training log(MSE) as well as the testing NMSE, based on 80 supplementary simulated data, in the n < p case, and displayed their means for each value of σ5 in Fig. 15. The global decrease of the log(MSE) linked to the p val models confirms that this criterion models the random noise in y. On the contrary, the bootstrap-based criterion returns a regular increase of the log(MSE), which empirically shows that it better succeeds in separating the real common information from the random noise. The overfitting issue linked to the p val criterion has major consequences for its predictive ability and thus leads to higher NMSE than those of BootYT. Furthermore, the BootYT criterion is more robust than p val to the increasing noise level in y. It returns consistent testing NMSE for datasets linked to values of σ5 lower than the common-information standard deviation, i.e. σ5 < 1.727, and even returns quite reasonable testing NMSE means up to σ5 = 2.51, while the p val criterion returns out-of-range testing NMSE for σ5 ⩾ 1.01.
We then displayed the evolution of the testing NMSE variances in Fig. 16. The results obtained with the bootstrap-based criterion are linked to acceptable variances while σ5 < 1.727, so that we can postulate that, as long as the random-noise standard deviation is lower than that of the real common information, this new criterion returns stable results. On the contrary, the p val models are linked to a high variability of their testing NMSE. This is due to their over-complexity, described in part 4.2.1.
However, these out-of-range variances also lead to non-significant mean differences when using t-tests on these datasets, although the mean differences are notable, based on the results displayed in Fig. 15.
To obtain consistent t-test outcomes for comparing the predictive ability of these criteria, we had to use the results obtained in the n > p case. We thus simulated 100 supplementary subjects as testing sets. The evolution of the means and variances of the testing NMSE for both the bootstrap-based criterion and the p val criterion are reported in Fig. 17. Based on these graphics, we notice that the models built with the bootstrap-based criterion are on average better than the trivial ones for σ5 ⩽ 2.51, while the p val models become worse than these trivial ones for σ5 > 1.81 (Fig. 17). Thus, p val fails to construct better models than the trivial ones when the noise level in y is higher than the common-information standard deviation. Both criteria return low variances of their testing NMSE for σ5 ⩽ 3.01, so that the t-tests return consistent outcomes on this range of values. The results of these t-tests are displayed in Fig. 18.
Based on these t-test results, we conclude that, to set up a predictive model on a dataset with significant random noise in it, our new bootstrap-based criterion is the one that should be advised. Note that the non-significant differences for σ5 ≥ 3.51 are due to the sharp increase of the variances linked to the p val results (see Fig. 17).
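To make the two quantities underlying these comparisons concrete, here is a minimal Python sketch (not the code used in the study). It assumes the usual reading of NMSE as the testing MSE normalized by the MSE of the trivial model that always predicts the mean of y, so that NMSE = 1 corresponds to that trivial model, and uses Welch's unequal-variance t statistic, appropriate here since the two criteria have very different NMSE variances:

```python
import math

def nmse(y_true, y_pred):
    """Normalized MSE: MSE of the model divided by the MSE of the
    trivial model that always predicts the mean of y_true."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    mse = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred)) / n
    mse_trivial = sum((yt - mean_y) ** 2 for yt in y_true) / n
    return mse / mse_trivial

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two samples with unequal variances,
    e.g. the testing NMSE values of two competing criteria."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)
```

With this convention, a model outperforms the trivial one exactly when its NMSE is below 1, which is how the σ5 thresholds above are read off Fig. 17.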

PLS-PR: Conclusion
In the case of a response vector y linked to a Poisson distribution with significant random noise in it, the bootstrap-based criterion stands out as the only one that should be used. Indeed, the others can be interpreted as increasing functions of σ5, so that they model the random noise in y, leading to overfitting issues. As a direct consequence, they return models with poor predictive abilities compared to the ones our new criterion builds.

Application to a real dataset
In this section, we focus on an allelotyping study. Our method is applied to a dataset concerning 267 subjects with colon cancer. Measures were done on 33 microsatellites in search of an allelic imbalance, which indicates an abnormal number of allele copies of a nearby gene of interest. The aim of the study was to find the microsatellite subsets that best discriminate left and right colon tumors. Thus, the univariate response corresponds to the original localisation of a colon tumor, leading to a binary response y, taking value 0 (respectively 1) if the localisation was on the right colon (respectively on the left). More details are available in Weber et al. (2007). This dataset contains missing values, so we performed a preprocessing step to complete it using the R package mice. As y is a 0-1 response, we used the following three stopping criteria in components construction: our new bootstrap-based criterion, the CV-MClassed one and the p val one. The corresponding extracted numbers of components are respectively equal to 6, 5 and 4.

The bootstrap-based criterion selects more components than the CV-MClassed one, which does not match what we observed in the simulations (see Fig. 10). In fact, the results displayed in Fig. 10 were obtained by computing the means of the extracted numbers of components over the 100 simulated datasets for each fixed couple (σ4, σ5). As mentioned in part 4.1, results based on the CV-MClassed criterion have a high variability. To take this variability into account, we computed the CV-MClassed criterion 100 times on our dataset, leading to the distribution of the extracted number of components reported in Fig. 19. Then, by computing the mean of the 100 extracted numbers of components, we obtained 7.99, a result similar to the simulation ones. Thus, our simulation results are validated on this real case study. We also performed the same process using our new bootstrap-based criterion and report the results in Fig. 19. Based on the distributions in Fig. 19, the major drawback of the CV-MClassed criterion, namely the dependency of the extracted number of components on the way the groups are randomly formed, clearly shows up. Thus, performing a single CV to find the number of components using this criterion must be avoided. As expected, the BootYT criterion returns stable results and concludes, in almost 80% of cases, in 6 components.
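The repeated-CV experiment just described can be mimicked generically: re-run a fold-dependent selection rule many times and inspect the distribution of its answers. A hypothetical Python sketch follows; `select_ncomp_fn` is a stand-in for one run of the CV-MClassed rule (the actual PLS logistic regression machinery is not implemented here):

```python
import random
from collections import Counter

def repeated_cv_selection(select_ncomp_fn, n_runs=100, seed=0):
    """Re-run a CV-based selection rule with fresh random fold splits and
    return (distribution, mean) of the selected numbers of components.
    `select_ncomp_fn(rng)` is a user-supplied, hypothetical stand-in that
    performs one q-fold CV with folds drawn from `rng` and returns the
    chosen number of components."""
    rng = random.Random(seed)
    picks = [select_ncomp_fn(rng) for _ in range(n_runs)]
    return Counter(picks), sum(picks) / len(picks)
```

Averaging the 100 selected values, as done above to obtain 7.99, is then a one-liner, while the full `Counter` exposes the fold-to-fold variability that a single CV run hides.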
We also tested the robustness of these three criteria through a bootstrap resampling process, with 100 bootstrap iterations, and a jackknife one. These two resampling methods lead to distributions of the extracted number of components for each of the three compared criteria. The results are graphically reported in Fig. 20. They confirm the high resampling robustness of our new bootstrap-based criterion compared to the CV-MClassed one. The p val criterion shows comparable robustness but, based on our simulation results, with a higher bias. Based on these results and on our simulation ones, we can reasonably conclude that, for this dataset, the optimal number of components is 6.
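The bootstrap and jackknife robustness check can be sketched in the same generic fashion; `select_fn` below is again a hypothetical stand-in for running one of the stopping criteria on a resampled dataset:

```python
import random
from collections import Counter

def resampling_distributions(data, select_fn, n_boot=100, seed=0):
    """Distributions of the selected number of components under
    (i) n-out-of-n non-parametric bootstrap resampling of the subjects and
    (ii) the leave-one-out jackknife."""
    rng = random.Random(seed)
    n = len(data)
    boot = Counter(
        select_fn([data[rng.randrange(n)] for _ in range(n)])  # bootstrap sample
        for _ in range(n_boot)
    )
    jack = Counter(select_fn(data[:i] + data[i + 1:]) for i in range(n))  # LOO
    return boot, jack
```

A criterion is robust in the sense used above when both resampling distributions concentrate on a single value, as BootYT does on 6 components for this dataset.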

Discussion
We developed a new bootstrap-based stopping criterion in PLS components construction, characterised by a high level of stability and robustness with respect to noise levels compared to other commonly used criteria.
This bootstrap-based criterion has a shortcoming in that its computational runtime is higher than that of the others, since it requires k × ((p_l + 1) × R) least squares regressions per dataset. A first improvement has already been made by developing a parallel implementation of this criterion. Furthermore, the development of corrected degrees of freedom (dof) in the PLSGLR framework would also permit the derivation of a corrected BIC formulation in this framework. Such a corrected BIC could provide an interesting alternative to the bootstrap-based criterion, since it could save an important amount of computational runtime, provided those corrected dof become available.

Appendix A

Let us first determine the number of distinct partitions of {1, . . ., n} which can be written as:

{A_1, . . ., A_{q_1 − 1}, A_{q_1}, A_{q_1 + 1}, . . ., A_{q_r − 1}, A_{q_r}, A_{q_r + 1}, . . ., A_q}   (A.1)

To lighten the formulas, let us set ω = {q_j ∈ E | q_j − q_{j−1} = 1} and m_j = q_j − q_{j−1} − 1. Then, knowing that, for any set containing n elements, the number of distinct p-element subsets that can be formed from it is given by the binomial coefficient (n choose p), the number of distinct partitions of the form (A.1) is obtained as a product of such binomial coefficients over the successive blocks, with indicator terms 1_{q_j ∉ ω} and 1_{q_r = q} handling the boundary cases.

Let RSS_{k−1} denote the Residual Sum of Squares of the (k − 1)-component model and PRESS_k the PREdictive residual Sum of Squares, calculated by CV, of the k-component model. Tenenhaus (1998) considers that a new component t_k significantly improves the prediction of y if:

√PRESS_k ≤ 0.95 √RSS_{k−1}, i.e. Q²_k = 1 − PRESS_k / RSS_{k−1} ≥ 0.0975.
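The PRESS-based rule recalled just above can be made concrete with a small sketch. `loo_press_mean` computes the leave-one-out PRESS of the intercept-only model as a toy example (the CV predictions of any k-component model could be plugged in instead), and `keep_component` applies the rule in the form it is usually stated, PRESS_k ≤ 0.95² × RSS_{k−1}:

```python
def loo_press_mean(y):
    """Leave-one-out PRESS of the intercept-only model: each y_i is
    predicted by the mean of the remaining n - 1 observations."""
    n = len(y)
    total = sum(y)
    return sum((yi - (total - yi) / (n - 1)) ** 2 for yi in y)

def keep_component(press_k, rss_prev, threshold=0.95):
    """Tenenhaus-style rule: retain component k when
    PRESS_k <= threshold**2 * RSS_{k-1}, i.e. Q2_k >= 1 - threshold**2."""
    return press_k <= threshold ** 2 * rss_prev
```

With the default threshold, 1 − 0.95² = 0.0975, which is the familiar Q² cut-off used in PLS software.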

Figure 8: n < p; Left: Boxplots of row variances; Right: Evolution of row variances

Figure 10: n > p; Left: Row means surfaces; Right: Boxplots of row variances

Figure 11: n > p; From top to bottom: AIC row means surface; BIC row means surface

Figure 12: n < p; Left: Row means surfaces; Right: Boxplots of row variances

Figure 14: From top to bottom: AIC, BIC, p val and BootYT row means surfaces; Left: n > p case; Right: n < p case

Figure 15: n < p; Left: Evolution of training log(MSE) means; Right: Evolution of testing NMSE means

Figure 17: n > p; Left: Evolution of testing NMSE variances; Center: Evolution of testing NMSE means on all data; Right: Evolution of testing NMSE means for σ5 ≤ 2.51

Figure 19: Distribution of the extracted number of components with the q = 5 CV-MClassed criterion (Left) and BootYT (Right)