Robust signal dimension estimation via SURE

The estimation of signal dimension under heavy-tailed latent factor models is studied. As a primary contribution, robust extensions of an earlier estimator based on Gaussian Stein's unbiased risk estimation are proposed. These novel extensions are based on the framework of elliptical distributions and robust scatter matrices. Extensive simulation studies are conducted in order to compare the novel methods with several well-known competitors in both estimation accuracy and computational speed. The novel methods are applied to a financial asset return data set.


Introduction

Premise
Modern data sets are increasingly large, and often the first step in analysing them is the application of a suitable dimension reduction procedure, for example, principal component analysis (PCA). A fundamental question pertaining to any such reduction is choosing the correct number of latent components to retain: underestimating their number leads to the loss of important information, whereas picking too many components inevitably leads to the modelling of noise in the later stages of the analysis and to increased computational burden that could have been avoided with a more careful choice of the dimension.
Besides being large in size and volume, in many applications, such as economics and finance, it is common for data sets to display heavier tails than the Gaussian distribution. Consider, for example, stock market returns, which are often modelled with distributions having infinite variance [2]. In such cases, the estimation of the latent dimension is further complicated, as one can no longer rely on the covariance matrix, on whose eigenvalues many of the standard dimension estimation methods are based (a review is given later in this section).
With the above scenario in mind, the objective of the current paper is to study and develop dimension estimation for multivariate data sets exhibiting arbitrarily heavy tails. While our treatment is not high-dimensional in the usual sense, we still put special emphasis on obtaining a method that is computationally scalable to data with the number of variables in the hundreds rather than in the tens, as such dimensionalities are very common in the applications listed earlier. We base our theoretical framework on the concepts of the elliptical family and Stein's unbiased risk estimation, elaborated in more detail next.

Elliptical latent factor model
We assume that our data x_1, . . ., x_n is an i.i.d. sample of p-variate vectors generated from the elliptical latent factor model

x_i = µ + V D z_i, (1)

where µ ∈ R^p, V ∈ R^{p×p} is an orthogonal matrix, z_i obeys a spherical distribution [8], i.e., z_i ∼ O z_i for any orthogonal matrix O ∈ R^{p×p}, and D ∈ R^{p×p} is a diagonal matrix with the diagonal elements σ_1 ≥ · · · ≥ σ_d > σ_{d+1} = · · · = σ_p ≡ σ > 0. Conceptually, the model says that the observed x_i are obtained by mixing the principal components D z_i with the matrix V and by applying a location shift µ. The final p − d principal components in D z_i are orthogonally invariant, meaning that they are essentially "structureless" and, as is typical, we view them as noise. Thus the main objective in the model (1) is to estimate the latent signals, i.e., the first d elements of D z_i, along with their number d.
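To make the data-generating mechanism concrete, the model can be sketched as follows. This is a minimal illustration with an assumed Gaussian radial part (any spherical distribution could be substituted); the function name and parameter values are ours, not the paper's.

```python
import numpy as np

def sample_elliptical_factor_model(n, p, d, signal_sigmas, noise_sigma, rng):
    """Draw n observations from model (1): x_i = mu + V D z_i, here with a
    Gaussian spherical z_i (any spherical z_i could be used instead)."""
    mu = np.zeros(p)                                   # location shift
    V, _ = np.linalg.qr(rng.standard_normal((p, p)))   # random orthogonal mixing matrix
    diag = np.concatenate([signal_sigmas, np.full(p - d, noise_sigma)])
    z = rng.standard_normal((n, p))                    # spherical: O z has the same law as z
    return mu + (z * diag) @ V.T                       # rows are x_i' = mu' + (D z_i)' V'

rng = np.random.default_rng(0)
X = sample_elliptical_factor_model(200, 10, 6, np.linspace(3.0, 1.5, 6), 0.5, rng)
```

The first d diagonal entries of D carry the signal scales and the remaining p − d entries share the common noise scale, mirroring the eigenvalue structure described above.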
The model (1) takes a still more intuitive form in the special case where z_i obeys the standard Gaussian distribution, the most well-known example of an elliptical distribution. In this case model (1) reduces to

x_i = µ + V_0 y_i + ε_i, (2)

where the loading matrix V_0 ∈ R^{p×d} contains the first d columns of V as its columns, y_i ∼ N_d(0, diag{(σ_1^2 − σ^2), . . ., (σ_d^2 − σ^2)}), and ε_i ∼ N_p(0, σ^2 I_p). Model (2) reveals that in the Gaussian case the d-dimensional signal residing in the column space of V_0 is explicitly corrupted with the noise vectors ε_i. Note that this additive representation does not apply to any other elliptical distribution. The standard method of extracting the factors y_i (or the corresponding subspace) in the Gaussian model (2) is PCA: one computes the first d eigenvectors of the covariance matrix of x_i and projects the observations onto their span. However, the success of this procedure hinges crucially on the knowledge of the latent dimension d < p, usually unknown in practice. As the misspecification of the dimension has ill consequences (either missing part of the signal or riddling the estimates of the latent factors with noise), an important part of solving the factor model (2) is the accurate estimation of d. We next review a particular method for accomplishing this under the model (2), on which our subsequent developments are also based.
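The PCA extraction step with a known dimension can be sketched as follows (an illustrative implementation, not the paper's code):

```python
import numpy as np

def pca_reconstruct(X, k):
    """Reconstruct each observation from its first k principal components:
    project the centered data onto the span of the top-k eigenvectors of
    the sample covariance matrix and add the mean back."""
    xbar = X.mean(axis=0)
    Xc = X - xbar
    S0 = Xc.T @ Xc / len(X)
    vals, vecs = np.linalg.eigh(S0)
    Uk = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k eigenvectors
    return xbar + Xc @ (Uk @ Uk.T)
```

With k = p the reconstruction returns the data unchanged, while for k < p the centered reconstructions lie in a k-dimensional subspace; the difficulty discussed above is that the correct k = d is unknown.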

Stein's unbiased risk estimate
In [31], the latent dimension d in Gaussian PCA was estimated through the application of Stein's unbiased risk estimate (SURE), a general technique for determining the optimal values of tuning parameters (of which the latent dimension d is an example) of estimation procedures.
Ignoring the model (2) for a moment, we briefly recall the basic idea behind SURE. Assume that we observe i.i.d. univariate w_1, . . ., w_n, generated as w_i = a_i + e_i, where the a_i ∈ R are constants and the errors satisfy e_i ∼ N(0, τ^2) for some τ^2 > 0. Let further to each a_i correspond an estimator â_i(w), viewed as a (differentiable) function of the data w = (w_1, . . ., w_n). Then, the SURE R corresponding to the estimators â_i is defined as

R = (1/n) Σ_{i=1}^n { (w_i − â_i(w))^2 − τ^2 + 2 τ^2 (∂/∂w_i) â_i(w) }.

In his celebrated paper [24], Stein proved that

E(R) = (1/n) Σ_{i=1}^n E{ (â_i(w) − a_i)^2 },

showing that R is an unbiased estimator of the risk associated with the estimators â_i, in the complete absence of the true means a_i. A multivariate version of SURE was in [31] adapted to the PCA model (2) (conditional on the y_i) to estimate the expected risk associated with any particular choice of the latent dimension d. The estimate of d is then chosen to be the dimensionality for which the risk is minimized. See also the earlier work [30].
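The unbiasedness can be checked numerically on a toy estimator. The sketch below uses a linear shrinkage rule â_i(w) = c w_i (our illustrative choice, not from the paper), whose divergence is simply c, and compares the Monte Carlo average of the SURE to the exact risk.

```python
import numpy as np

rng = np.random.default_rng(1)
a = np.array([0.5, -1.0, 2.0, 0.0])     # true means, unknown in practice
tau2, c, reps = 1.0, 0.7, 20000         # illustrative constants

def sure(w):
    # (1/n) sum { (w_i - a_hat_i)^2 - tau^2 + 2 tau^2 * (d a_hat_i / d w_i) },
    # with a_hat_i(w) = c * w_i so that the divergence term is the constant c
    return np.mean((w - c * w) ** 2 - tau2 + 2 * tau2 * c)

# exact risk of a_hat: (1/n) sum E(a_hat_i - a_i)^2 = mean of (1-c)^2 a_i^2 + c^2 tau^2
risk_true = np.mean((1 - c) ** 2 * a ** 2 + c ** 2 * tau2)
R_avg = np.mean([sure(a + rng.normal(0.0, np.sqrt(tau2), a.size))
                 for _ in range(reps)])
```

Averaged over repetitions, the SURE matches the true risk even though each individual evaluation never sees the means a_i.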
The obtained estimator was shown in [31] to be highly successful under the Gaussian model (2). However, it is clear that the estimator cannot attain the same level of efficiency under the wider elliptical model (1). There are two reasons for this: (i) the SURE-criterion was derived in [31] strictly under the Gaussian assumption and, more importantly, (ii) many standard elliptical distributions (e.g., the multivariate Cauchy distribution) do not have enough finite moments for the covariance matrix, on which the SURE-criterion is based, to even exist. Hence, the estimator of [31] is not necessarily even well-defined under the elliptical model (1) (on the population level)!

The scope of the current work
The primary objective of the current work is to provide a workaround for the previous issue by deriving a robust version of the SURE-criterion that allows for effective dimension estimation under the elliptical model (1) in the presence of heavy-tailed distributions. As described earlier, such procedures are highly called for in applications such as finance, where assumptions of finite moments are usually deemed unreasonable. Our robust extension of the SURE-criterion is carried out via a plug-in strategy where the covariance matrix in the Gaussian SURE-criterion is replaced with a suitable scatter matrix. Especially popular in the robust statistics community, scatter matrices are a class of statistical functionals that measure the dispersion/scatter/variation in multivariate data while (usually) being far more robust to the impact of heavy tails and outliers than the covariance matrix; see Section 3 for their definition and several examples. We consider three different plug-in estimators, depending on which form of the Gaussian SURE-criterion the scatter matrix is plugged into. The first two options lead to analytically simple estimators that depend on the data only through the eigenvalues of the used scatter matrix, much like the classical estimators of dimension (see below). The third strategy is more elaborate and involves computing particular derivatives of the scatter functional (and the companion location functional).
As our secondary objective, we conduct an extensive simulation study where the proposed methods are compared to each other and to several (families of) competing estimators from the literature. These competitors include: (i) the classical estimator based on successive asymptotic hypothesis testing for the equality of the final eigenvalues of a chosen scatter matrix (testing for subsphericity), see [17,23]; (ii) a variation of the previous estimator where the null distributions are bootstrapped instead of relying on asymptotic approximations [17]; (iii) the general-purpose procedure for inferring the rank of a matrix from its sample estimate known as the ladle, which we apply to select scatter matrices, see [12]; (iv) the SURE-estimator of [31], which can be seen as the non-robust version of our proposed estimator. We are not aware of comparisons of similar magnitude having been conducted earlier in the literature.
To summarize the results of our simulation study (given in Section 5), they reveal that the SURE-based robust methodology for the determination of the latent dimension is: (i) accurate, achieving comparable or superior estimation results to its competitors in various data scenarios; (ii) flexible, that is, it allows the free selection of the used robust scatter matrix, in strict contrast to its closest competitor, the asymptotic hypothesis test mentioned above, which is (for theoretical reasons) "locked" to operate with a specific, slow-to-compute scatter matrix; and (iii) fast, requiring no bootstrap replicates or any kind of resampling. Due to these three properties, we find the method an especially attractive tool for data-rich, large-scale applications.

Organization of the manuscript
The manuscript is organized as follows. In Section 2 we recall the Gaussian SURE-criterion of [31] for estimating the latent dimension. In Sections 3 and 4 we propose three different robust extensions of the criterion through the use of different pairs of location and scatter functionals. Sections 5 and 6 contain the simulation study and an empirical (financial) example on asset returns, respectively. In Section 7 we conclude with some future research ideas. The proofs of all technical results are collected in Appendix A.

SURE criterion for Gaussian PCA
In this section, we recall how the SURE-criterion can be used to estimate the latent dimension d under the Gaussian model (2). Our derivation of the criterion differs from the original version [31] in that we employ empirical centering of the data, whereas [31] did not. We made this change as it is unreasonable to assume that the true location of the data is known in practice. As a consequence, our formula for the Gaussian SURE in Lemma 1 differs nontrivially from the one given in [31].
Due to the empirical centering, we assume, without loss of generality, that µ = 0 throughout the following. As in [31], we use Stein's Lemma to construct an unbiased estimator of the risk associated with estimating the signals V_0 y_i by their reconstructions x̂_i based on the first k principal components. Assuming that k = 1, . . ., p is fixed from now on and letting U_k ∈ R^{p×k} denote a matrix of (any) first k orthogonal eigenvectors of the covariance matrix S_0 := (1/n) Σ_{i=1}^n (x_i − x̄)(x_i − x̄)', the reconstructions can be written as x̂_i ≡ x̂_i(k) = t_0 + P_k(x_i − t_0), where P_k := U_k U_k' is the orthogonal projection onto the space spanned by the first k eigenvectors of S_0 and t_0 := (1/n) Σ_{i=1}^n x_i is the mean vector. For convenience, we replicate an intermediate result towards the final Gaussian SURE-criterion below as Lemma 1. In the lemma the reconstructions x̂_i are treated as functions of the original data x_1, . . ., x_n, and the result implicitly assumes the former to be differentiable in the latter; sufficient conditions for this will be discussed later in Section 3. In Lemma 1, and throughout the paper, ‖·‖ denotes the Euclidean norm.
Lemma 1. Under model (2), the quantity

R_1k := tr{(I_p − P_k) S_0} − p σ^2 + (2σ^2/n) Σ_{i=1}^n Σ_{j=1}^p (∂/∂x_ij) x̂_ij

is an unbiased estimator of the risk

(1/n) Σ_{i=1}^n E ‖x̂_i − V_0 y_i‖^2.

The two k-dependent terms of R_1k in Lemma 1 have natural interpretations: the term tr{(I_p − P_k) S_0} measures the total variation of the data in directions orthogonal to the first k eigenvectors and takes large values when the used number of eigenvectors is insufficient to capture the full d-variate latent signal. The quantity (1/n) Σ_{i=1}^n Σ_{j=1}^p (∂/∂x_ij) x̂_ij measures the average influence the observations have on their own reconstructions and is often interpreted as the generalized degrees of freedom of the model, where large values indicate overfitting to the data set, see [26] (in the extreme case with k = p we actually have x̂_ij = x_ij). Thus, R_1k can be seen to be similar in form to Akaike's information criterion (AIC) (and other related information criteria), whose two terms also measure model fit and model complexity, respectively.
To apply the criterion R_1k in practice, we require an expression for the partial derivatives in Lemma 1. As is shown later in the context of Lemma 3 in Section 4, the partial derivatives exist under the assumption that the eigenvalues of S_0 are simple (which holds almost surely under model (2)) and have the forms presented next. See [30,31] for similar results.
Lemma 2. Under model (2), the quantity R_2k, obtained from R_1k by evaluating the partial derivatives in closed form in terms of the eigenvalues s_1 > · · · > s_p of S_0, is an unbiased estimator of the risk in Lemma 1.

To apply the criterion R_2k in practice, an estimator for the unknown error variance σ^2 is needed, and several feasible alternatives exist. For example, [13] used, in a similar context, the median of the final p/2 eigenvalues of S_0. The resulting estimator is accurate but makes the implicit assumption that d ≤ p/2. To avoid such difficult-to-verify conditions, we prefer to instead use the final eigenvalue s_p of S_0 as the estimator of the noise variance, imposing minimal assumptions on the latent dimensionality (i.e., that d < p). Naturally, the price to pay is that s_p suffers from underestimation in finite samples. To combat the underestimation, [30] proposed an alternative estimator of σ^2 based on the limiting spectral distribution of the covariance matrix under high-dimensional Gaussian data. Mimicking this strategy is not viable in our scenario, as results on the limiting spectral distributions of the scatter matrices used in Section 3 are still scarce in the literature.
Plugging in the estimator s_p and observing that tr{(I_p − P_k) S_0} = Σ_{ℓ=k+1}^p s_ℓ now leads to two different sample forms of the SURE criterion for Gaussian PCA, R̂_1k and R̂_2k in (4). The "hat" notation for R̂_1k, R̂_2k signifies the fact that they have been obtained from R_1k, R_2k by replacing the unknown σ^2 with its estimator s_p. In the following sections we obtain outlier-resistant alternatives to both R̂_1k and R̂_2k by plugging in robust measures of location and scatter in place of the mean vector and covariance matrix in (4). In addition, we will also consider an "asymptotic" version R̂_3k of the criterion R̂_2k, given in (5), where the terms of order o_p(1) (in the asymptotic regime where n → ∞) have been removed. Note that even though we might have s_j − s_ℓ →_p 0 for some indices j, ℓ, the limiting distribution of √n(s_j − s_ℓ) for such indices is absolutely continuous (with respect to the Lebesgue measure) [1], meaning that the impact of the double sum in R̂_2k can be expected to be negligible for large n.
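The trace identity used above is easy to verify numerically; the following self-contained check uses arbitrary simulated data.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 8))
Xc = X - X.mean(axis=0)
S0 = Xc.T @ Xc / len(X)                # sample covariance matrix

s, U = np.linalg.eigh(S0)
order = np.argsort(s)[::-1]
s, U = s[order], U[:, order]           # eigenvalues in decreasing order

k = 3
Pk = U[:, :k] @ U[:, :k].T             # projection onto the top-k eigenspace
lhs = np.trace((np.eye(8) - Pk) @ S0)
rhs = s[k:].sum()                      # tail eigenvalue sum
```

The agreement of `lhs` and `rhs` reflects the fact that the residual variation after projecting onto the top-k eigenspace is exactly the sum of the discarded eigenvalues, which is what makes the sample criteria functions of the eigenvalues alone.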

Robust plug-in SURE criteria
Plug-in techniques are a typical way to create outlier-resistant versions of standard multivariate methods in robust statistics, see, for example, [5,20]. In this spirit, we replace the mean vector t_0 and the covariance matrix S_0 in the SURE criteria (4), (5) with a pair (t, S) of location and scatter functionals [21], the definitions of which we recall next. Letting F be an arbitrary p-variate distribution, a location functional (location vector) t is a map taking values in R^p and obeying, for any invertible A ∈ R^{p×p} and b ∈ R^p, the transformation rule t(F_{A,b}) = A t(F) + b, where F_{A,b} is the distribution of the random vector Ax + b and x ∼ F. Similarly, a scatter functional (scatter matrix) S is a map taking values in the space of positive semi-definite matrices and obeying, for any invertible A ∈ R^{p×p} and b ∈ R^p, the transformation rule S(F_{A,b}) = A S(F) A'. These transformation properties are typically referred to as affine equivariance.
Location and scatter functionals mimic the properties of the mean vector and the covariance matrix and typically measure some aspects of the center and spread of a distribution, respectively. In particular, if F belongs to the elliptical model (1), then t(F) = µ and S(F) = τ_{S,F} V D^2 V' for all location and scatter functionals (t, S) for which t(F) and S(F) exist, where the scalar τ_{S,F} > 0 depends on both the exact distribution of the spherical z_i and the used scatter functional, see [21, Theorem 3.1]. Hence, all choices of (t, S) estimate, up to scale, the same quantities under the elliptical model, implying that replacing the mean vector and the covariance matrix in SURE with the pair (t, S) is warranted (at least in the Gaussian special case (2) of the elliptical model, under which the SURE criteria in Section 2 were derived). Note that this equivalence of different (t, S)-pairs does not necessarily mean that the dimension estimates given by different choices of (t, S) should always be equal. Namely, the equivalence indeed holds under the population-level model (1), but in practical situations the accuracy of the estimates is greatly influenced by the finite-sample properties (in particular, the robustness properties) of the used location and scatter functionals. This is clearly evident in the simulation results of Section 5.
Examples of popular location and scatter functionals are given later in this section and we assume, for now, that we have selected some robust location-scatter pair (t, S). Outlier-resistant versions of the forms R̂_2k and R̂_3k of the Gaussian SURE criterion in (4) and (5) are then straightforwardly obtained: we simply replace the eigenvalues s_j of the covariance matrix S_0 with the eigenvalues of the scatter functional S in the definitions. Note that while the location functional t does not play an explicit role in this construction, it is usually a part of the definition of S, see, for example, the spatial median and the spatial sign covariance matrix later in this section.
As an alternative to the above, rather simplistic plug-in estimators, we also consider a more elaborate extension based on the form R̂_1k of SURE in (4) where, in addition to S_0, we also replace the partial derivatives (∂/∂x_ij) x̂_ij with their counterparts based on the robust pair (t, S). That is, the robust version of R̂_1k uses the reconstruction estimates x̂_i = t + P_k(x_i − t), where the centering is done with the robust location functional t (instead of t_0) and the projection matrix P_k is now taken onto the space spanned by the first k eigenvectors of the robust scatter functional S (instead of S_0). Due to its more technical nature in comparison to the other two criteria, we have postponed the discussion of the extension of R̂_1k to Section 4.
Finally, regardless of which of the three criteria R̂_1k, R̂_2k and R̂_3k one uses, the corresponding estimate d̂ of the latent dimension d is obtained as the minimizing index, d̂ = argmin_{k=0,...,p−1} R̂_jk.
We next recall several popular options for the location-scatter pair (t, S).

Mean vector and covariance matrix
The most typical choice for the pair (t, S) is the mean vector and the covariance matrix, i.e.,

t(F) = E_F(x) and S(F) = E_F{(x − E_F(x))(x − E_F(x))'},

where E_F means that the expectation is taken under the assumption that x ∼ F. This choice simply leads to the Gaussian SURE-criterion discussed in Section 2.
As discussed before, this option, despite often being the optimal choice under the assumption of normality, is highly intolerant of outliers and heavy tails.

Spatial median and spatial sign covariance matrix
The spatial median t(F ) of a distribution F is defined as any minimizer of the convex function The spatial median is one of the oldest and most studied robust measures of multivariate location, see, [3,9], and reverts to the univariate concept of median when p = 1.It can be shown to exist for any F (in particular, no moment conditions are required) and it is unique as soon as F is not concentrated on a line in R p [16] which is guaranteed, in particular, almost surely when F is absolutely continuous.The standard scatter functional counterpart for the spatial median is the spatial sign covariance matrix (SSCM), defined as, where t(F ) is the spatial median of F , which is assumed to be unique, and the sign function u : R p → R p is defined as u(x) = x/ x for x = 0 and u(0) = 0. Like its location counterpart, also the SSCM has been extensively studied in the literature, see, for instance, [7,15,33].
The defining feature of the SSCM is that it depends on the data only through the "signs" u(x − t(F)), giving equal weight to points in a given direction regardless of their norm (which, in turn, is what makes the SSCM robust to outliers). Especially for high-dimensional data, the loss of information introduced by discarding the observation magnitudes is relatively small, as it represents losing only a single degree of freedom in the p-dimensional space (whereas the sign retains the remaining p − 1 degrees of freedom).
We also note that the spatial median and the spatial sign covariance matrix are, strictly speaking, not a pair of location and scatter functionals in the usual sense, as they satisfy the equivariance properties listed in Section 3 only when the matrix A is orthogonal. However, this is not an issue in our scenario for the following reasons: (i) The spatial median is a consistent estimator of the location parameter in the elliptical model (1) under minor regularity conditions, see, e.g., [14]. (ii) Under the elliptical model (1), two eigenvalues s_j, s_ℓ of the SSCM are equal if and only if the corresponding elements σ_j, σ_ℓ of the matrix D are equal, see [7]. Hence, the eigenvalues of the spatial sign covariance matrix contain the same (qualitative) information about the latent signal dimension as those of any "proper", affine equivariant scatter functional. However, the spatial sign covariance matrix is also known to compress the range of the eigenvalues [34], making it more difficult to distinguish between the signal and the noise, and thus we next consider an alternative to it. This alternative is known as Tyler's shape matrix and is often seen as the affine equivariant version of the SSCM.

Tyler's shape matrix
Tyler's shape matrix [28] is one of the earliest proposed and most studied scatter functionals, see, e.g., [6,35]. Using it requires a location functional t and, in the following, we take this to be the spatial median, as is common in the literature. Tyler's shape matrix S(F) is defined as any S with det(S) = 1 satisfying the fixed-point equation

S = p E_F{ (x − t(F))(x − t(F))' / ((x − t(F))' S^{-1} (x − t(F))) }. (7)

A unique solution S(F) is obtained as soon as F does not concentrate too heavily on a subspace of R^p, see [6] for the exact conditions. Inspection of equation (7) also reveals that any solution S to it is defined only up to scale and, to obtain a unique representative, a popular choice is indeed to use the determinant condition det(S) = 1 to fix the scale of the solution, see [22]. Consequently, S(F) does not describe the full scatter of F but only its shape (scale-standardized scatter). However, this is sufficient for our purposes, as scaling preserves the ordering of the eigenvalues of S(F) and, hence, their division into signal and noise. Note that this also means that Tyler's shape matrix satisfies the affine equivariance property discussed in the beginning of Section 3 only up to scale. The computation of S(F) can be shown to correspond to a geodesically convex minimization problem [35], meaning that an efficient algorithm for its estimation in practice is straightforwardly constructed. In our simulations we have used the R package ICSNP [19] for this purpose.
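For illustration, the fixed-point equation suggests a direct iterative solver (a standard algorithm sketch, not the ICSNP implementation; the reconstruction of the fixed-point equation above is our reading of Tyler's definition):

```python
import numpy as np

def tyler_shape(X, t, n_iter=100):
    """Fixed-point iteration for Tyler's shape matrix around a given
    location t, with the scale fixed by det(S) = 1."""
    n, p = X.shape
    Y = X - t
    S = np.eye(p)
    for _ in range(n_iter):
        # Mahalanobis-type squared distances (x_i - t)' S^{-1} (x_i - t)
        d2 = np.einsum('ij,jk,ik->i', Y, np.linalg.inv(S), Y)
        S = p * (Y.T @ (Y / d2[:, None])) / n   # fixed-point update
        S = S / np.linalg.det(S) ** (1.0 / p)   # renormalize to det(S) = 1
    return S

rng = np.random.default_rng(5)
X = rng.standard_normal((400, 5))
S = tyler_shape(X, np.zeros(5))
```

For spherical data, the resulting shape estimate is close to the identity matrix, as expected.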

Hettmansperger-Randles estimator
As our final choice for the location-scatter pair (t, S), we consider the so-called Hettmansperger-Randles (H-R) estimator, which was originally introduced in the context of robust location estimation (with the associated shape functional obtained as a "by-product" of the location estimation) [10]. The H-R pair (t(F), S(F)) is defined as any (t, S) with det(S) = 1 satisfying the pair of fixed-point equations

E_F{ u(S^{-1/2}(x − t)) } = 0 and p E_F{ u(S^{-1/2}(x − t)) u(S^{-1/2}(x − t))' } = I_p. (8)

Observing that the LHS of the first equation in (8) is, disregarding the matrix S^{-1/2}, the gradient of the objective function (6) of the spatial median, we see that the H-R pair (t(F), S(F)) can be interpreted as a simultaneously determined spatial median and Tyler's shape matrix. This concurrent estimation of location and scatter (or, rather, shape, as again any solution S to the fixed-point equations is unique at most up to scale) makes the resulting estimator affine equivariant (up to scale in the case of S). Despite its attractiveness, the theoretical properties of the H-R estimator have garnered less attention in the literature compared to its previously introduced alternatives. In particular, we are not aware of any studies investigating conditions that would guarantee the uniqueness of the solution (t, S).
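A natural way to solve the pair of equations is to alternate a location step and a shape step; the sketch below is our own illustrative implementation (convergence diagnostics and stopping rules are omitted), not the algorithm of [10].

```python
import numpy as np

def hettmansperger_randles(X, n_iter=200, eps=1e-12):
    """Alternating fixed-point sketch for the H-R pair: a Weiszfeld-type
    location step in the metric of the current shape S, followed by a
    Tyler-type shape step around the current location t."""
    n, p = X.shape
    t, S = np.median(X, axis=0), np.eye(p)
    for _ in range(n_iter):
        Sinv = np.linalg.inv(S)
        Y = X - t
        r = np.sqrt(np.maximum(np.einsum('ij,jk,ik->i', Y, Sinv, Y), eps))
        w = 1.0 / r
        t = (w[:, None] * X).sum(axis=0) / w.sum()   # location step
        Y = X - t
        d2 = np.maximum(np.einsum('ij,jk,ik->i', Y, Sinv, Y), eps)
        S = p * (Y.T @ (Y / d2[:, None])) / n        # Tyler-type shape step
        S = S / np.linalg.det(S) ** (1.0 / p)        # renormalize to det(S) = 1
    return t, S
```

The location step is exactly a Weiszfeld update applied to the S-standardized residuals, so that at a fixed point the averaged standardized signs vanish, matching the first equation of the pair.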
To summarize, the three robust alternatives to the mean-covariance pair introduced in this section can all be seen to estimate analogous quantities, while at the same time forming a sort of "hierarchy" with respect to their equivariance properties: (i) the spatial median and the SSCM satisfy affine equivariance only for orthogonal A; (ii) replacing the SSCM with Tyler's shape matrix yields the full affine equivariance property for the scatter (shape) functional; and (iii) both the location and scatter (shape) components of the H-R estimator are affine equivariant. As affine equivariance is the natural transformation property for a scatter functional in the presence of the elliptical model (1), we expect that this ordering also applies to the comparative performances of the corresponding SURE-procedures in practice. This claim will be investigated through simulations in Section 5.

Robust extension of R1k
In this section, we explore extending the SURE-criterion R̂_1k in (4) to accommodate an arbitrary location-scatter pair. The theoretical cost of such an extension is considerably larger than for R̂_2k and R̂_3k as, instead of simply plugging in eigenvalues, it involves computing the partial derivatives (∂/∂x_ij) x̂_ij.
In the sequel, let the observed sample x_1, . . ., x_n of points in R^p be fixed and denote its empirical distribution by F_n. Moreover, for ε > 0 we let F_{n,i,j,ε} denote the empirical distribution of the perturbed sample x_1, . . ., x_i + ε e_j, . . ., x_n, where e_j is the jth vector in the canonical basis of R^p. For the extension of R̂_1k to be well-defined in the first place, t and S are naturally required to be differentiable in a suitable sense, and the next assumption formalizes this requirement.

Assumption 1. For any i = 1, . . ., n and j = 1, . . ., p, there exist the limits

h_ij := lim_{ε→0} {t(F_{n,i,j,ε}) − t(F_n)}/ε and H_ij := lim_{ε→0} {S(F_{n,i,j,ε}) − S(F_n)}/ε.

In order for also the projection matrix P_k (onto the span of the first k eigenvectors of S(F_n)) to be differentiable in the previous sense for all k = 1, . . ., p, all eigenvalues of the matrix S(F_n) must be simple. This condition, formalized in Assumption 2 below, is rather mild and, in particular, holds almost surely for both the covariance matrix and the SSCM if the points x_1, . . ., x_n are drawn from an absolutely continuous distribution.
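The limits in Assumption 1 can be approximated numerically for any location and scatter functional by evaluating it on the perturbed sample (a generic finite-difference sketch; the function names are ours):

```python
import numpy as np

def perturbed(X, i, j, eps):
    """Sample corresponding to F_{n,i,j,eps}: the i-th point shifted by eps * e_j."""
    Xp = X.copy()
    Xp[i, j] += eps
    return Xp

def numeric_h_H(loc, scat, X, i, j, eps=1e-6):
    """Finite-difference approximations of the limits h_ij and H_ij in
    Assumption 1 for a generic location functional loc and scatter scat."""
    Xp = perturbed(X, i, j, eps)
    h_ij = (loc(Xp) - loc(X)) / eps
    H_ij = (scat(Xp) - scat(X)) / eps
    return h_ij, H_ij
```

Such numerical derivatives are also a convenient way to cross-check the analytic expressions derived below.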
Assumption 2. The eigenvalues of S(F_n) are distinct.
Under the previous two assumptions, the partial derivatives (∂/∂x_ij) x̂_ij exist and their sum over j has the analytical form given in the next lemma.

Lemma 3. Under Assumptions 1 and 2, the sum Σ_{j=1}^p (∂/∂x_ij) x̂_ij admits a closed-form expression in terms of the quantities h_ij and H_ij of Assumption 1 and the eigendecomposition of S(F_n), where T_ℓ denotes the orthogonal projection onto the space spanned by the ℓth eigenvector of S(F_n) and s_ℓ the corresponding eigenvalue.
Lemma 3 essentially says that, as soon as one obtains expressions for the quantities h ij and H ij in Assumption 1 (and Assumption 2 holds) for some particular location scatter pair (t, S), these can be plugged in to Lemma 3 to construct a version of the SURE-criterion R1k that is based on (t, S).In Lemma 4 below we have provided, for completeness, these expressions for the standard mean-covariance pair.The resulting SURE-criterion R1k is, naturally, the Gaussian SURE as described in Section 2.
Lemma 4. The mean vector and the covariance matrix satisfy Assumption 1 with

h_ij = (1/n) e_j and H_ij = (1/n){ e_j (x_i − x̄)' + (x_i − x̄) e_j' }.

Despite not offering us anything new, Lemma 4 also serves in its simplicity as a contrast to our next result, detailing the forms of h_ij and H_ij for the spatial median/SSCM pair. What makes deriving these quantities more complicated, compared to the mean-covariance pair, is that no analytical expression is available for the spatial median (instead, it is obtained as a minimizer of the objective function described in Section 3).

Lemma 5. Assume (i) that the points x_1, . . ., x_n are not concentrated on a line in R^p and (ii) that t(F_n) ≠ x_i for all i = 1, . . ., n. Then the spatial median and the spatial sign covariance matrix satisfy Assumption 1.

Plugging h_ij and H_ij from Lemma 5 into the derivatives in Lemma 3, and consequently into R̂_1k in (4), now gives us yet another robust criterion for determining the signal dimension. We note that while the additional assumption (ii) imposed in Lemma 5 seems difficult to analyze theoretically, its validity is simple to check in practice (and assumption (i) is satisfied, in particular, almost surely when F is absolutely continuous).
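The forms for the mean/covariance pair can be sanity-checked by finite differences. In the sketch below the analytic expressions h_ij = e_j/n and H_ij = {e_j(x_i − x̄)' + (x_i − x̄)e_j'}/n are our own derivation of what Lemma 4 states, so treat them as an assumption.

```python
import numpy as np

rng = np.random.default_rng(8)
n, p, i, j, eps = 20, 3, 4, 1, 1e-6
X = rng.standard_normal((n, p))
e_j = np.eye(p)[j]
xbar = X.mean(axis=0)

# analytic forms (our derivation for the mean/covariance pair)
h_an = e_j / n
H_an = (np.outer(e_j, X[i] - xbar) + np.outer(X[i] - xbar, e_j)) / n

# finite-difference counterparts: perturb x_ij by eps and re-evaluate
cov = lambda Z: (Z - Z.mean(axis=0)).T @ (Z - Z.mean(axis=0)) / len(Z)
Xp = X.copy()
Xp[i, j] += eps
h_num = (Xp.mean(axis=0) - xbar) / eps
H_num = (cov(Xp) - cov(X)) / eps
```

The centering terms cancel in the derivative of the covariance matrix because the centered observations sum to zero, which is why H_ij takes such a compact symmetric form.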
Mimicking the proof of Lemma 5, it would be possible to derive equivalent results also for our two remaining location-scatter pairs. However, we have decided not to do so, for two reasons: (i) Some preliminary computations (not shown here) indicate that, as with the spatial median/SSCM pair in Lemma 5, these derivations lead to analytically cumbersome expressions for h_ij and H_ij, from which no real insight can be gained. (ii) Due to the complexity of the resulting expressions (and the large number of nested summations involved), the practical usefulness of the extensions is questionable. Indeed, as our timing comparisons in Section 5.4 demonstrate, the version of R̂_1k obtained based on Lemma 5 is slower than R̂_2k and R̂_3k by several orders of magnitude, while at the same time offering no or only minuscule gains in accuracy. Preliminary exploration reveals that this issue is magnified still further for Tyler's shape matrix. Hence, while these extensions would be technically possible to derive, we did not see any real practical value in doing so.

Simulations
In order to study the finite-sample properties of our proposed robust extensions of SURE, we conduct an array of simulation studies. As competing methods we have chosen the following set of well-established estimators from the literature, see Section 1.4 for further details: (i) The classical estimator based on an asymptotic test of subsphericity [17,23]. The R package ICtest [18] includes two implementations of it, one based on the covariance matrix and one based on the H-R estimator, and we include both in the comparison.
(ii) The same estimator as in (i) but with the null distribution of the test estimated through bootstrapping. This estimator can be based on any of the four scatter matrices described in Section 3, and we thus include all of them in the comparison. Throughout the study we used 200 bootstrap samples, the default value in the implementation in ICtest [18].
(iii) The ladle estimator of [12], which, too, can be based on any of the four scatter matrices. The estimator relies on resampling, for which we used the default value of 200 replicates in the implementation in ICtest [18].
(iv) Our centered version of the SURE-estimator of [31].
The above four categories of estimators are denoted in the following as Asymp, Boot, Ladle and SURE, respectively, with the used scatter matrix given in parentheses; e.g., Asymp(HR) denotes the asymptotic-test-based method using the H-R estimator. In addition, we distinguish three different versions of the SURE-estimator, SURE1, SURE2 and SURE3, referring to the use of the objective functions R̂_1k, R̂_2k and R̂_3k, respectively. We thus have a total of 19 methods to compare, summarized in Table 1. The final four columns of the table relate to the timing study in Section 5.4.
Throughout the simulations, we measure the performance of the methods through their total proportions of correctly estimated dimensions across all replicates.
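This performance measure is straightforward; a minimal sketch of the metric (in Python, with illustrative names that are not the paper's actual code) is:

```python
import numpy as np

def estimation_accuracy(estimates, true_d):
    """Proportion of replicates in which the estimated latent
    dimension equals the true dimension d."""
    estimates = np.asarray(estimates)
    return float(np.mean(estimates == true_d))
```

For example, four replicates estimating 6, 6, 5 and 6 against a true dimension of 6 give an accuracy of 0.75.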

Tail thickness
In the first simulation study, we explore how the methods perform under varying levels of heavy-tailedness. As a setting for this, we consider multivariate t-distributions with the degrees of freedom equal to ν = 1, 3, 5, ..., 25. Thus, the heaviest tails are obtained in the case ν = 1, corresponding to the multivariate Cauchy distribution. The simulation is repeated 100 times for every degree of freedom, and for every repetition a random sample consisting of n = 100 observations is generated. In each case, we take the latent dimension to be d = 6, whereas as the total dimensionality we use p = 10. The error "variance" (i.e., the square of the final diagonal elements of D in (1)) is always σ² = 0.5, and the signal "variances" are the squares of the first d diagonal elements of D. The results are shown in Figure 1. Unsurprisingly, the classical covariance matrix based methods fail to consistently find the correct latent dimension d in the presence of too heavy tails (left side of the plot). This effect is most pronounced in the case ν = 1, where the corresponding t-distribution does not possess the finite second-order moments required by covariance estimation. The robust methods, on the other hand, do not suffer from this issue as they make no moment assumptions on the data generating distribution.
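The data-generating mechanism of this study can be sketched as follows (a Python illustration under our reading of model (1); the function name, the diagonal scatter and the fixed signal scale are illustrative assumptions, not the paper's exact code):

```python
import numpy as np

def simulate_t_factor_data(n, p, d, nu, sigma2=0.5, signal_var=2.0, rng=None):
    """Draw n observations from a p-variate t-distribution with nu degrees
    of freedom whose scatter matrix has d "signal" eigenvalues (signal_var)
    and p - d "noise" eigenvalues (sigma2), giving signal dimension d."""
    rng = np.random.default_rng(rng)
    scales = np.concatenate([np.full(d, np.sqrt(signal_var)),
                             np.full(p - d, np.sqrt(sigma2))])
    z = rng.standard_normal((n, p)) * scales   # Gaussian core with diagonal scatter
    w = rng.chisquare(nu, size=(n, 1)) / nu    # chi-square mixing variable
    return z / np.sqrt(w)                      # heavy-tailed elliptical sample
```

With nu = 1 this gives the multivariate Cauchy case, for which no second-order moments exist and covariance-based dimension estimation breaks down.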
As the degrees of freedom increase, we observe that the covariance based methods start to outperform the robust alternatives. The reason for this is that, when ν → ∞, the multivariate t-distribution approaches the normal distribution, for which the covariance based methods offer optimal inference.
Comparing the methods by type (different colours in Figure 1), the overall best performances are given by the robust bootstrap (Boot, green) and asymptotic (Asymp, orange) methods, with marginal differences between the two. Somewhat surprisingly, the computationally most expensive method, i.e., the ladle, has a relatively poor performance in this scenario. Comparing the different types of SURE to each other, it seems that the additional computational and theoretical complexity of SURE1 (cyan) does not provide additional benefits when compared to SURE2 (blue) and SURE3 (red), which both have a relatively similar performance, not falling much behind Boot and Asymp.

Latent dimension
In our second simulation study, we investigate how the relative size of the underlying latent dimension d affects the methods' performances. As the main selling point of the SURE-based methods is their light computational load, we have dropped the more computationally intensive methods (Boot, Ladle) from the comparison, focusing in this (and the following) study on comparing SURE only to its most relevant competitor, Asymp. We choose SURE2 to be the "representative" of the SURE-family as, based on the first simulation study, both SURE1 and SURE3 had performance similar to it. Thus, the families of methods included in the current simulation study are Asymp and SURE2.
Recall that SURE2 estimates the latent dimension as the index minimizing the corresponding objective function k → R2k. Based on our experiments, this strategy can sometimes be quite unstable, especially when the latent dimension is comparatively small. Thus, as an experimental alternative, we propose estimating d as the change point in the series of differences R2(k+1) − R2k. To understand the motivation for this, consider the following two typical forms for the graph formed by the points (k, R2k): (i) The points (k, R2k) form a V-shaped curve around d. In this case, the true dimension is both a minimizer and a location change point of the differences (the differences change sign at the true dimension). (ii) The graph (k, R2k) decreases linearly until d and stays roughly constant afterwards. In this case, d is a location change point of the differences but not necessarily a minimizer (it might happen that the minimizer occurs only after d). Thus, in these two (rather idealistic) examples, the experimental change point alternative offers more consistent detection of the dimension than the standard method of seeking the minimizer. We implemented the change point detection as binary segmentation through the function cpt.meanvar in the R-package changepoint [11]. The resulting method is denoted in the sequel as "SURE2 cp".
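The two selection rules can be contrasted with a small sketch (Python; the single-split mean segmentation below is a simplified stand-in for the binary segmentation performed by cpt.meanvar, used only to illustrate the idea):

```python
import numpy as np

def dim_by_minimizer(R):
    """Estimate d as the (1-based) index minimizing the criterion k -> R_k."""
    return int(np.argmin(R)) + 1

def dim_by_changepoint(R):
    """Estimate d as the location change point of the differences
    R_{k+1} - R_k, via a single best mean-split (simplified binary
    segmentation, standing in for cpt.meanvar)."""
    diffs = np.diff(R)
    best_k, best_cost = 1, np.inf
    for k in range(1, len(diffs)):
        left, right = diffs[:k], diffs[k:]
        # within-segment sum of squares around the segment means
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k + 1  # the split in the differences marks the dimension
```

For the V-shaped case (i), e.g. R = (5, 4, 3, 2, 3, 4, 5), both rules return d = 4; for the plateau case (ii), e.g. R = (5, 4, 3, 2, 2, 2, 2), the change point rule still locates d = 4 even though the minimizer is ambiguous.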
We fix the total dimensionality to p = 100 and let the latent dimension vary as d = 5, 10, 15, ..., 95. We consider two sample sizes n = 1000, 2000 and repeat the simulation 100 times for every combination of d and n, generating in each repetition a random sample from the multivariate t-distribution with 1 degree of freedom. The error variance is fixed to σ² = 0.5 and the signal variances are randomly generated from the uniform distribution Unif(1,3), independently in every repetition. The proportions of correctly estimated dimensions d for the different procedures are presented in the two panels of Figure 2, separately for n = 1000 and n = 2000.
The covariance based variants of Asymp, SURE2 and SURE2 cp have the expected (poor) performance, and in the following discussion we focus only on their robust versions. The top panel of Figure 2 reveals that, when n = 1000, SURE2 starts producing good results only once the latent dimension d is roughly half the data dimension p. However, doubling the sample size from n = 1000 to n = 2000 has a drastic effect on its performance and, under the latter, SURE2 estimates the true dimension correctly in each of the 100 repetitions (and for every value of d). SURE2 cp, on the other hand, has the opposite behaviour for n = 1000, giving the most accurate results when d is relatively low. However, it does not benefit as much from the increased sample size as SURE2 does. Neither of the SURE-methods appears to be much affected by the actual choice of the robust scatter matrix (SSCM, Tyler or HR). Asymp is seen to give consistently accurate behaviour under all considered settings and, as such, offers a reasonably safe choice in practice.

Sample size
In the third simulation, we study the effect of the sample size on the estimation accuracy, including again the same set of methods as in the previous study.
The considered sample sizes are n = 500, 750, 1000, 1500, 2000, 2500, 5000. The simulation is repeated 100 times for every sample size n, such that for every repetition a random sample of n observations is generated from the multivariate t-distribution with 1 degree of freedom. We take the latent and the total dimensionalities to be d = 20 and p = 100, respectively, throughout the simulation.
As the error variance we use σ² = 0.5, and the signal variances are again randomly generated from the uniform distribution Unif(1,3), independently for each of the 100 replicates. The proportions of correctly estimated dimensions d for the different procedures are presented in Figure 3.
The most striking feature of Figure 3, which also sheds some light on the behaviour of SURE2 in the previous simulations, is the sudden jump in the estimation accuracy of robust SURE2 from close to 0% to 100% at around n = 1700. Interestingly, the fact that SURE2 cp has a more even behaviour across the different sample sizes indicates that this jump is not so much a consequence of the SURE criterion R2k itself but of the way in which the dimension estimate is selected based on the criterion values (recall that SURE2 picks the minimizing value of k, whereas SURE2 cp uses a more complicated change point technique). We thus conclude that the standard technique of simply choosing the minimizing index of the SURE criterion is not optimal at smaller sample sizes (in this scenario), whereas it becomes very effective when n is large enough. This matter clearly warrants further investigation and, due to its complexity, we have left it for future research; see Section 7. Finally, we also observe that, overall, the used scatter matrix seems to have very little effect on the results, apart from Cov, which again breaks down in the presence of a heavy-tailed distribution.

Computation time
As our final simulation study, we compare the running times of all 19 methods included in Table 1. The change point variant of SURE2 is not included as its computational difference to the base SURE2-method is marginal and negligible compared to the differences between the methods themselves. We distinguish two different sample sizes n = 200, 400 and two different dimensionalities p = 10, 20, their combinations leading to a total of four different settings. For each setting, we take the data distribution to be the multivariate t-distribution with ν = 1 degree of freedom, d = 0.6p, and the signal and noise variances as in the previous simulation. We run each of the 19 methods 10 times on each setting (using the same set of 10 data sets for each method) and record their computation times. The experiment was conducted on a desktop computer with an AMD Ryzen 5 3600 6-core processor and 16 GB of RAM. The average running times in seconds are given in the final four columns of Table 1.
From Table 1 we make the following observations: (i) Computational complexity in the methods stems from two sources, the choice of the scatter matrix and the bootstrap replications (as performed by Boot and Ladle), of which the latter has a significantly greater impact on the timing. (ii) Doubling the sample size n does very little to increase the computation times, whereas doubling the dimension p multiplies the times by roughly 1.5. (iii) Of the robust methods, the fastest are by far SURE2, SURE3 and Asymp, which all have computation times of roughly the same order of magnitude. SURE1 falls somewhere in between them and the more intensive Boot and Ladle.
Based on the observations made above and in the previous experiments, we summarize our simulation results as follows: The SURE-based robust methodology offers a fast alternative to the computationally heavy bootstrap-based methods, losing minimally to them in accuracy but at the same time retaining its usefulness also in the face of high-dimensional data sets. When compared to Asymp, SURE2 loses to it in accuracy when both n and the latent dimension d are small but quickly surpasses Asymp when n is increased sufficiently, at least in the scenario we considered. Additionally, whereas Asymp is (for theoretical reasons) restricted to using the computationally heavy H-R estimator to achieve robustness, SURE2 does not have this limitation, and we may replace the H-R estimator in SURE2 with, e.g., SSCM to obtain roughly a ten-fold gain in speed, based on Table 1, with almost identical accuracy. We thus conclude that the SURE-based robust methods offer fast and efficient estimators of the signal dimension in large-dimensional data-rich scenarios.

Application: asset returns
We next illustrate the robust SURE methods on a financial data set that was used to demonstrate principal component analysis in the classical textbook [27, Section 9] to search for common factors explaining (joint) asset return variability. The data itself is available on the author's (Ruey S. Tsay's) webpage and consists of monthly log stock returns (including dividends) of five stocks (IBM, HPQ, INTC, JPM, BAC) from January 1990 to December 2008. These p = 5 time series of length T = 228 months are illustrated in Figure 4. [27] computed the portmanteau test statistics and found that, despite the time series nature of the data, there is no substantial serial correlation in the returns, and hence we also ignore the serial dependence in our analysis. Based on the results of PCA, [27] concluded that there are two common factors: the "market component" represents the general movement of the stock market and the "industrial component" represents the difference between the two industrial sectors, namely technology (IBM, HPQ and INTC) versus financial services (JPM and BAC). In addition, [27] points out that the IBM stock "has its own features that are worth further investigation".
Instead of computing only a single estimate of the latent dimension for the data set, we take a local approach and run a window of length ℓ through the data. For each of the T − ℓ + 1 windows we then estimate the latent dimension using one of our proposed methods. As the obtained dimensionalities correspond to the windows and not the actual observation months, we "back-transform" them as follows: For each individual month, we take the weighted average of the estimated dimensions of all windows in which that particular month is a member. We assign the weights such that the middle two observations in each window get a weight equal to 1 and the weights decrease linearly towards the window endpoints. This procedure thus produces a "smoothed" curve of estimated latent dimensions for the full observation period. To guarantee that the ellipticity of the data is at least partially fulfilled, we resort to rather small window lengths, taking either ℓ = 48 or ℓ = 72 in the following. Visual inspection (not shown here) reveals that scatter plots of the obtained windows indeed exhibit elliptical shapes throughout the observation period. The intuition behind this somewhat experimental approach is that the latent dimension d can be seen to measure the internal complexity of the observed multivariate time series, allowing us to identify from the smoothed curve intervals of time when the stocks behave in a more unified manner (low dimension) or more independently of each other (high dimension). Note that the months close to the beginning and the end of the measurement interval belong on average to fewer windows, meaning that they are expected to show more erratic behaviour.
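The back-transformation described above can be sketched as follows (Python; this is one natural reading of "middle two observations get weight 1, decreasing linearly towards the endpoints", assuming an even window length, and is not necessarily the paper's exact implementation):

```python
import numpy as np

def smooth_window_dims(dims, T, ell):
    """Back-transform per-window dimension estimates (len(dims) = T - ell + 1)
    to a per-month curve: each month gets the weighted average of the
    estimates of all windows containing it, with weights decreasing
    linearly from 1 at the two middle positions to the window endpoints."""
    half = ell // 2
    # triangular within-window weight profile; for even ell the middle
    # two positions get weight 1
    w = np.minimum(np.arange(1, ell + 1), np.arange(ell, 0, -1)) / half
    num = np.zeros(T)
    den = np.zeros(T)
    for s, d_hat in enumerate(dims):   # window s covers months s .. s + ell - 1
        num[s:s + ell] += w * d_hat
        den[s:s + ell] += w
    return num / den
```

When all windows agree on the same dimension, the smoothed curve is constant at that value; disagreement between neighbouring windows produces the gradual transitions visible in Figure 5.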
For simplicity, we apply the described procedure only with SURE2, using each of the four scatter matrices (and the two window lengths), and the results are shown in Figure 5. In line with the evidence in [27], we find at least two, and in most time points at least three, common features in our five-stock case. Our robust approaches favour three (or even four) dimensions, emphasizing also other common features than the market and industrial components. Interestingly, the time-varying patterns of the estimated dimensionality in the robust approaches seem to be largely in accordance with each other and to follow general market conditions: the major periods of decreasing prices (i.e., the beginning of the 2000s and the financial crisis of 2007-2009), and their onsets, are associated with somewhat lower dimensions. Finally, note that the fact that the curves for the non-robust "Cov" are markedly different from the robust ones is a clear indication that the data indeed exhibit heavy-tailed behaviour (as is typical for asset returns) that hampers the estimation of the covariance matrix but leaves the robust estimates unaffected.

Discussion
The results obtained in this work open up several avenues for future research. Perhaps most notably, our simulations revealed that the standard approach of choosing the parameter estimate to be the global minimizer of the SURE criterion might not be optimal in the current scenario. As a simple alternative to minimization, we explored using a change point detection based approach, which indeed proved superior to minimization in various settings (and vice versa in other settings). As such, the matter clearly warrants more investigation. We also note that the dangers of "blindly" minimizing a model fit criterion are naturally well-known in the model selection literature. Indeed, [30] also mention this caveat, although not in the context of SURE but of the Bayesian information criterion. Despite this, we are not aware of any general alternatives to minimization having been proposed in the literature, discounting visual inspection (which can be seen as both heuristic and subjective). Quite possibly, such procedures may not even exist, as their behaviour would depend greatly on the functional form of the particular criterion in question. But, at least in the current scenario of dimension estimation, the change point criterion appears to provide a feasible option.
A second point of interest brought up by our simulations is the sudden increase in the accuracy of the robust SURE2 in Figure 3. In that particular data scenario there appears to be a "critical" sample size after which SURE2 achieves perfect estimation results. The dependence of this sample size on the model parameters, especially p and d, is something to be studied and quantified. We note that any theoretical investigation of this matter is likely to be very difficult as it concerns the finite-sample properties of the method, whereas the large majority of the existing theoretical results in the robust literature are derived in the asymptotic framework where n → ∞.
Finally, despite being a significant improvement over the Gaussian assumption, the elliptical model (1) can still be seen as somewhat limiting in practice. In particular, a consequence of the ellipticity is that the tails of the distribution are equally heavy along all directions in the space R^p. One solution would be to consider instead the independent component model x_i = µ + A z_i, where A ∈ R^{p×p} is of full rank and the components of the p-vector z_i are mutually independent random variables [4]. Exactly p − d elements of z_i are assumed to be Gaussian and, as such, noise, making the signal dimension of the model equal to d. By taking the signal components of z_i to have different tail decays, a richer variety of heavy-tailed behaviours for x_i is obtained; see [32]. The independent component model admits a solution through the use of pairs of scatter functionals, see [29], and a similar approach could possibly serve as a starting point for deriving a SURE-based criterion for the estimation of the dimension d in the independent component model.
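As an illustration, data from such an independent component model can be generated as follows (a Python sketch with hypothetical names; using t-distributed signal components is just one way to obtain differing tail decays):

```python
import numpy as np

def simulate_ic_model(n, p, d, mu, A, signal_dfs, rng=None):
    """Draw n observations from x_i = mu + A z_i, where the first d
    components of z_i are independent t-variates with differing degrees
    of freedom (differing tail decays) and the remaining p - d
    components are Gaussian and constitute the noise."""
    rng = np.random.default_rng(rng)
    z_signal = np.column_stack([rng.standard_t(df, size=n) for df in signal_dfs])
    z_noise = rng.standard_normal((n, p - d))
    z = np.column_stack([z_signal, z_noise])
    return mu + z @ A.T
```

Unlike under ellipticity, the marginal tails of x_i now vary with direction, which is the richer heavy-tailed behaviour the discussion refers to.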

A Proofs of the technical results
Proof of Lemma 1. We write, Now, ‖x̂_i − x_i‖² = tr{(I_p − P_k)(x_i − t_0)(x_i − t_0)^T}, implying that averaging over i in the preceding expansion yields Hence, the claim follows as soon as we show that E(x̂_ij ε_ij) = σ² E{(∂/∂x_ij) x̂_ij}.
To see this, we use the law of total expectation in conjunction with Stein's lemma to write, where the second equality uses Stein's lemma on the reconstruction x̂_ij treated as a function of the full data (which are, conditional on y_1, ..., y_n, Gaussian), and the third equality uses the fact that x_ij and ε_ij are equal up to translation by a constant (again, under the conditioning).
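The Stein identity invoked here, which in the univariate case reads E{g(X)(X − µ)} = σ² E{g′(X)} for X ~ N(µ, σ²) and smooth g, can be checked numerically; a minimal Monte Carlo sketch (illustrative only, with g(x) = x²):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, n = 1.0, 0.5, 2_000_000
x = rng.normal(mu, np.sqrt(sigma2), size=n)

g = lambda t: t ** 2        # smooth test function
g_prime = lambda t: 2 * t   # its derivative

lhs = np.mean(g(x) * (x - mu))      # Monte Carlo estimate of E{g(X)(X - mu)}
rhs = sigma2 * np.mean(g_prime(x))  # Monte Carlo estimate of sigma^2 E{g'(X)}
# for this g both sides equal 2 * mu * sigma^2 = 1.0
```

The two estimates agree up to Monte Carlo error, mirroring the role the lemma plays in trading the covariance term E(x̂_ij ε_ij) for a derivative.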
Proof of Lemma 3. We use, for brevity, the notation t := t(F_n), t* := t(F_{n,i,j,ε}), and similarly for the other quantities. Hence, by Assumption 1, we have t* = t + ε h_ij + o(ε) and S* = S + ε H_ij + o(ε). Thus, by [25, Chapter 1.3.2], the matrix S* has an ℓth eigenvector u*_ℓ with the following first-order expansion, where u_ℓ is the ℓth eigenvector of S and T_ℓ = u_ℓ u_ℓ^T is the corresponding orthogonal projection. Taking outer products on both sides of (9) gives Consequently, the projection P*_k onto the space spanned by the eigenvectors corresponding to the k largest eigenvalues of S* (which are, for small enough ε, simple) is P*_k = P_k + ε A_ij + o(ε), where A_ij is as in the statement of the lemma (note that, by the symmetry of H_ij, all terms with m ≤ k cancel).
The first term of (10)  Combining these two now yields the claim.
Proof of Lemma 5. We begin with h_ij. Let g : R^p → R be the objective function with g(t) = (1/n) Σ_{i=1}^n ‖x_i − t‖, from whose minimization the spatial median t(F_n) is found. It is straightforwardly checked that g is convex and, since, by the assumption that t(F_n) ≠ x_i for all i = 1, ..., n, g is differentiable in a neighbourhood of t(F_n), the gradient of g must vanish at t(F_n). Define now the function f : R^{p+1} → R^p such that f(ε, t) = (1/n){Σ_{m≠i} (x_m − t)/‖x_m − t‖ + (x_i + ε e_j − t)/‖x_i + ε e_j − t‖}.
Now, f{0, t(F_n)} = 0 and, moreover, f is continuously differentiable at (0, t(F_n)) and has the Jacobian D_t f{0, t(F_n)} = −(1/n)G, where G is as defined in the statement of the lemma. Now, G is a sum of weighted projection matrices onto the orthogonal complements of the lines spanned by the vectors y_i = x_i − t(F_n). Simplifying the expression now yields the claim.
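The vanishing-gradient characterization of the spatial median used in this proof can be illustrated numerically; the sketch below uses the classical Weiszfeld iteration (not the estimation routine of the paper) and checks that the gradient −(1/n) Σ_i (x_i − t)/‖x_i − t‖ is zero at the minimizer:

```python
import numpy as np

def spatial_median(X, iters=5000):
    """Weiszfeld iteration for the minimizer of g(t) = (1/n) sum_i ||x_i - t||."""
    t = X.mean(axis=0)
    for _ in range(iters):
        r = np.linalg.norm(X - t, axis=1)
        w = 1.0 / np.maximum(r, 1e-12)  # inverse-distance weights
        t = (w[:, None] * X).sum(axis=0) / w.sum()
    return t

def gradient_norm(X, t):
    """Norm of the gradient of g at t, i.e. of -(1/n) sum_i (x_i - t)/||x_i - t||."""
    diffs = X - t
    units = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    return float(np.linalg.norm(units.mean(axis=0)))
```

For data in general position (so that t(F_n) ≠ x_i for all i, as assumed in the lemma), the gradient norm at the output is numerically zero, matching the first-order condition f{0, t(F_n)} = 0.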

Figure 1: Percentages of correctly estimated dimensions d as a function of the degrees of freedom ν of the multivariate t-distribution in the tail thickness simulation. The sample size is n = 100 throughout.

Figure 2: Percentages of correctly estimated dimensions as a function of the underlying dimension d when the sample size is n = 1000 (top panel) or n = 2000 (bottom panel).

Figure 3: Percentages of correctly estimated dimensions as a function of the sample size n. The scale of the horizontal axis is logarithmic.
Figure 5: The smoothed curves of latent dimensionalities estimated using the window approach as described in the main text. The two panels correspond to the window lengths of ℓ = 48 and ℓ = 72 months, respectively.

Table 1: The estimators included in the simulation study, along with a binary indicator for their robustness. The final four columns give the average running times (in seconds) of the methods over 10 replicates in various settings.