Abstract
The minimum covariance determinant (MCD) approach estimates the location and scatter matrix using the subset of given size with lowest sample covariance determinant. Its main drawback is that it cannot be applied when the dimension exceeds the subset size. We propose the minimum regularized covariance determinant (MRCD) approach, which differs from the MCD in that the scatter matrix is a convex combination of a target matrix and the sample covariance matrix of the subset. A data-driven procedure sets the weight of the target matrix, so that the regularization is only used when needed. The MRCD estimator is defined in any dimension, is well-conditioned by construction and preserves the good robustness properties of the MCD. We prove that so-called concentration steps can be performed to reduce the MRCD objective function, and we exploit this fact to construct a fast algorithm. We verify the accuracy and robustness of the MRCD estimator in a simulation study and illustrate its practical use for outlier detection and regression analysis on real-life high-dimensional data sets in chemistry and criminology.
Introduction
The minimum covariance determinant (MCD) method (Rousseeuw 1984, 1985) is a highly robust estimator of multivariate location and scatter. Given an \(n\times p\) data matrix \(\varvec{X}=(\varvec{x}_1,\ldots ,\varvec{x}_n)'\) with \(\varvec{x}_i=(x_{i1},\ldots ,x_{ip})'\), its objective is to find h observations whose sample covariance matrix has the lowest possible determinant. Here \(h < n\) is fixed. The MCD estimate of location is then the average of these h points, whereas the scatter estimate is a multiple of their covariance matrix. Consistency and asymptotic normality of the MCD estimator have been shown by Butler et al. (1993) and Cator and Lopuhaä (2012). The MCD has a bounded influence function (Croux and Haesbroeck 1999) and has the highest possible breakdown value (i.e. \(50\%\)) when \(h=\left\lfloor (n+p+1)/2 \right\rfloor \) (Lopuhaä and Rousseeuw 1991). The MCD approach has been applied to various fields such as chemistry, finance, image analysis, medicine, and quality control, see e.g. the review paper of Hubert et al. (2008).
A major restriction of the MCD approach is that the dimension p must satisfy \(p<h\) for the covariance matrix of any h-subset to be non-singular. In fact, for accuracy of the estimator it is often recommended to take \(n > 5p\), e.g. in Rousseeuw et al. (2012). This limitation creates a gap in the availability of high breakdown methods for so-called “fat data”, in which the number of rows (observations) is small compared to the number of columns (variables). To fill this gap we propose a modification of the MCD to make it applicable to high dimensions. The basic idea is to replace the subset-based covariance by a regularized covariance estimate, defined as a weighted average of the sample covariance of the h-subset and a predetermined positive definite target matrix. The proposed Minimum Regularized Covariance Determinant (MRCD) estimator is then the regularized covariance based on the h-subset which makes the overall determinant the smallest.
In addition to its availability for high dimensions, the main features of the MRCD estimator are that it preserves the good breakdown properties of the MCD estimator and is well-conditioned by construction. Since the estimated covariance matrix is guaranteed to be invertible it is suitable for computing robust distances, and for linear discriminant analysis and graphical modeling (Öllerer and Croux 2015). Furthermore, we will generalize the C-step theorem of Rousseeuw and Van Driessen (1999) by showing that the objective function is reduced when concentrating the h-subset to the h observations with the smallest robust distance computed from the regularized covariance. This C-step theorem forms the theoretical basis for the proposed fast MRCD estimation algorithm.
The remainder of the paper is organized as follows. In Sect. 2 we introduce the MRCD covariance estimator and discuss its properties. Section 3 proposes a practical and fast algorithm for the MRCD. The extensive simulation study in Sect. 4 confirms the good properties of the method. Section 5 uses the MRCD estimator for outlier detection and regression analysis on real data sets from chemistry and criminology. The main findings and suggestions for further research are summarized in the conclusion.
From MCD to MRCD
Let \(\varvec{x}_1,\ldots ,\varvec{x}_n\) be a dataset in which \(\varvec{x}_i=(x_{i1},\ldots ,x_{ip})'\) denotes the i-th observation (\(i=1,\ldots ,n\)). The observations are stored in the \(n\times p\) matrix \(\varvec{X}=(\varvec{x}_1,\ldots ,\varvec{x}_n)'\). We assume that most of them come from an elliptical distribution with location \(\varvec{\mu }\) and scatter matrix \(\varvec{\Sigma }\). The remaining observations can be arbitrary outliers, and we do not know beforehand which ones they are. The problem is to estimate \(\varvec{\mu }\) and \(\varvec{\Sigma }\) despite the outliers.
The MCD estimator
The MCD approach searches for an h-subset of the data (where \(n/2 \leqslant h < n\)) whose sample covariance matrix has the lowest possible determinant. Clearly, the subset size h affects the efficiency of the estimator as well as its robustness to outliers. For robustness, \(n-h\) should be at least the number of outliers. When many outliers could occur one may set \(h=\lceil 0.5n\rceil \), whereas one typically sets \(h=\lceil 0.75n\rceil \) to obtain better efficiency. Throughout the paper, H denotes a set of h indices indicating which observations belong to the subset, and \(\mathcal {H}_h\) is the collection of all such sets. For a given H in \(\mathcal {H}_h\) we denote the corresponding \(h\times p\) submatrix of \(\mathbf {X}\) by \(\mathbf {X}_H\), and we use the term h-subset for both H and \(\mathbf {X}_H\) interchangeably. The mean and sample covariance matrix of \(\mathbf {X}_H\) are then
\(\mathbf {m}_{\varvec{X}}(H) = \frac{1}{h} \sum _{i \in H} \varvec{x}_i \qquad \qquad (1)\)
\(\mathbf {S}_{\varvec{X}}(H) = \frac{1}{h-1} \sum _{i \in H} \big (\varvec{x}_i - \mathbf {m}_{\varvec{X}}(H)\big )\big (\varvec{x}_i - \mathbf {m}_{\varvec{X}}(H)\big )' . \qquad \qquad (2)\)
The MCD approach then aims to minimize the determinant of \(\mathbf {S}_{\varvec{X}}(H)\) among all \(H \in \mathcal {H}_h\):
\(H_{MCD} = \underset{H \in \mathcal {H}_h}{{\text {argmin}}}\ \big ( \det \mathbf {S}_{\varvec{X}}(H) \big )^{1/p} , \qquad \qquad (3)\)
where we take the p-th root of the determinant for numerical reasons. Note that the p-th root of the determinant of the covariance matrix is the geometric mean of its eigenvalues; SenGupta (1987) calls it the standardized generalized variance. The MCD can also be seen as a multivariate least trimmed squares estimator in which the trimmed observations have the largest Mahalanobis distance with respect to the sample mean and covariance of the h-subset (Agulló et al. 2008).
The MCD estimate of location \(\mathbf {m}_{MCD}\) is defined as the average of the h-subset, whereas the MCD scatter estimate is given as a multiple of its sample covariance matrix:
\(\mathbf {m}_{MCD} = \mathbf {m}_{\varvec{X}}(H_{MCD}) \qquad \qquad (4)\)
\(\mathbf {S}_{MCD} = c_\alpha \, \mathbf {S}_{\varvec{X}}(H_{MCD}) , \qquad \qquad (5)\)
where \(c_\alpha \) is a consistency factor such as the one given by Croux and Haesbroeck (1999), which depends on the trimming percentage \(\alpha = (n-h)/n\). Butler et al. (1993) and Cator and Lopuhaä (2012) prove consistency and asymptotic normality of the MCD estimator, and Lopuhaä and Rousseeuw (1991) show that it has the highest possible breakdown value (i.e., \(50\%\)) when \(h=\left\lfloor (n+p+1)/2 \right\rfloor \). Accurately estimating a covariance matrix requires a sufficiently large number of observations; a rule of thumb is to require \(n>5p\) (Rousseeuw and Van Zomeren 1990; Rousseeuw et al. 2012). When \(p \geqslant h\) the MCD is ill-defined since all \(\mathbf {S}_{\varvec{X}}(H)\) have zero determinant.
The MRCD estimator
We will generalize the MCD estimator to high dimensions. As is common in the literature, we first standardize the p variables to ensure that the final MRCD scatter estimator is location invariant and scale equivariant. This means that for any diagonal \(p\times p\) matrix \(\mathbf {A}\) and any \(p\times 1\) vector \(\mathbf {b}\) the MRCD scatter estimate \(S(\mathbf {A} \mathbf {X} + \mathbf {b})\) equals \(\mathbf {A}\mathbf {S}(\mathbf {X})\mathbf {A}'\). The standardization needs to use a robust univariate location and scale estimate. To achieve this, we compute the median of each variable and stack these medians in a location vector \(\varvec{\nu }_{\varvec{X}}\). We also estimate the scale of each variable by the Qn estimator of Rousseeuw and Croux (1993), and put these scales in a diagonal matrix \(\mathbf {D}_{\varvec{X}}\). The standardized observations are then
\(\varvec{u}_i = \mathbf {D}_{\varvec{X}}^{-1} (\varvec{x}_i - \varvec{\nu }_{\varvec{X}}) , \qquad \qquad (6)\)
and they are collected in the \(n\times p\) matrix \(\varvec{U}=(\varvec{u}_1,\ldots ,\varvec{u}_n)'\).
This disentangles the location-scale and correlation problems, as in Boudt et al. (2012).
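For illustration, a minimal version of this standardization step might look as follows; the function names are ours, the Qn computation is a bare-bones version of the estimator of Rousseeuw and Croux (1993) without finite-sample correction, and in practice one would rather rely on an existing implementation such as robustbase::Qn in R.

```python
import numpy as np

def qn_scale(x):
    """Rough Qn scale estimate: the k-th order statistic of the pairwise
    differences |x_i - x_j| (i < j), with k = C(h, 2) and h = floor(n/2) + 1,
    multiplied by the consistency factor 2.2219 (no small-sample correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    diffs = np.abs(x[:, None] - x[None, :])[np.triu_indices(n, k=1)]
    h = n // 2 + 1
    k = h * (h - 1) // 2
    return 2.2219 * np.partition(diffs, k - 1)[k - 1]

def robust_standardize(X):
    """Center each column by its median and scale it by Qn, yielding U,
    the location vector nu_X and the diagonal of D_X."""
    X = np.asarray(X, dtype=float)
    nu = np.median(X, axis=0)
    d = np.array([qn_scale(col) for col in X.T])
    return (X - nu) / d, nu, d
```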
In a second step, we use a predetermined and well-conditioned symmetric and positive definite target matrix \(\mathbf {T}\). We also use a scalar weight coefficient \(\rho \), henceforth called the regularization parameter. We then define the regularized covariance matrix of an h-subset H of the standardized data \(\varvec{U}\) as
\(\mathbf {K}(H) = \rho \, \mathbf {T} + (1-\rho ) \, c_\alpha \, \mathbf {S}_{\varvec{U}}(H) , \qquad \qquad (7)\)
where \(\mathbf {S}_U(H)\) is as defined in (2) but for \(\varvec{U}\), and \(c_\alpha \) is the same consistency factor as in (5).
It will be convenient to use the spectral decomposition \(\mathbf {T}=\mathbf {Q}{\varvec{\Lambda }}\mathbf {Q}'\), where \({\varvec{\Lambda }}\) is the diagonal matrix holding the eigenvalues of \(\mathbf {T}\) and \(\mathbf {Q}\) is the orthogonal matrix holding the corresponding eigenvectors. We can then rewrite the regularized covariance matrix \(\mathbf {K}(H)\) as
\(\mathbf {K}(H) = \mathbf {Q}{\varvec{\Lambda }}^{1/2} \big [ \rho \, \mathbf {I} + (1-\rho ) \, c_\alpha \, \mathbf {S}_{\varvec{W}}(H) \big ] {\varvec{\Lambda }}^{1/2}\mathbf {Q}' , \qquad \qquad (8)\)
where the \(n \times p\) matrix \(\varvec{W}\) consists of the transformed standardized observations \(\varvec{w}_i = {\varvec{\Lambda }}^{-1/2}\mathbf {Q}'\varvec{u}_i.\) It follows that \(\mathbf {S}_{\varvec{W}}(H) = {\varvec{\Lambda }}^{-1/2}\mathbf {Q}' \mathbf {S}_{\varvec{U}}(H)\mathbf {Q} {\varvec{\Lambda }}^{-1/2}\).
The MRCD subset \(H_{MRCD}\) is defined by minimizing the determinant of the regularized covariance matrix \(\mathbf {K}(H)\) in (8):
\(H_{MRCD} = \underset{H \in \mathcal {H}_h}{{\text {argmin}}}\ \big ( \det \mathbf {K}(H) \big )^{1/p} . \qquad \qquad (9)\)
Since \(\mathbf {T}\), \(\mathbf {Q}\) and \({\varvec{\Lambda }}\) are fixed, \(H_{MRCD}\) can also be written as
\(H_{MRCD} = \underset{H \in \mathcal {H}_h}{{\text {argmin}}}\ \Big ( \det \big [ \rho \, \mathbf {I} + (1-\rho ) \, c_\alpha \, \mathbf {S}_{\varvec{W}}(H) \big ] \Big )^{1/p} . \qquad \qquad (10)\)
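To see why (9) and (10) have the same minimizer, note that by (8) and the orthogonality of \(\mathbf {Q}\) we have
\(\det \mathbf {K}(H) = \det ({\varvec{\Lambda }}) \, \det \big [ \rho \, \mathbf {I} + (1-\rho ) \, c_\alpha \, \mathbf {S}_{\varvec{W}}(H) \big ] = \det (\mathbf {T}) \, \det \big [ \rho \, \mathbf {I} + (1-\rho ) \, c_\alpha \, \mathbf {S}_{\varvec{W}}(H) \big ] ,\)
so the factor \(\det (\mathbf {T})^{1/p}\) does not depend on H and can be dropped from the objective.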
Once \(H_{MRCD}\) is determined, the MRCD location and scatter estimates of the original data matrix \(\mathbf {X}\) are defined as
\(\mathbf {m}_{MRCD} = \varvec{\nu }_{\varvec{X}} + \mathbf {D}_{\varvec{X}} \, \mathbf {m}_{\varvec{U}}(H_{MRCD}) \qquad \qquad (11)\)
\(\mathbf {K}_{MRCD} = \mathbf {D}_{\varvec{X}} \, \mathbf {K}(H_{MRCD}) \, \mathbf {D}_{\varvec{X}} . \qquad \qquad (12)\)
The MRCD is not affine equivariant, as this would require that \(S(\mathbf {A} \mathbf {X} + \mathbf {b})\) equals \(\mathbf {A}\mathbf {S}(\mathbf {X})\mathbf {A}'\) for all nonsingular matrices \(\mathbf {A}\) and any \(p\times 1\) vector \(\mathbf {b}\). As mentioned before, the MRCD scatter estimate is location invariant and scale equivariant due to the initial standardization step.
Note that the construction (7) with the subset H equal to the entire sample corresponds to the regularized estimator of Ledoit and Wolf (2004). That estimator is not robust to outliers since the outliers are included in its computation. The MRCD can thus be seen as a robustification of the Ledoit–Wolf estimator by plugging in the minimum covariance determinant principle. Note that this is different from first computing the MCD scatter matrix (which might not exist due to singularity) and afterwards taking a convex combination with a positive definite matrix.
The MRCD precision matrix
The precision matrix is the inverse of the scatter matrix, and is needed for the calculation of robust MRCD-based Mahalanobis distances, for linear discriminant analysis, for graphical modeling (Öllerer and Croux 2015), and for many other computations. By (12) the MRCD precision matrix is given by the expression
\(\mathbf {K}_{MRCD}^{-1} = \mathbf {D}_{\varvec{X}}^{-1} \mathbf {Q} {\varvec{\Lambda }}^{-1/2} \big [ \rho \, \mathbf {I} + (1-\rho ) \, c_\alpha \, \mathbf {S}_{\varvec{W}}(H_{MRCD}) \big ]^{-1} {\varvec{\Lambda }}^{-1/2} \mathbf {Q}' \mathbf {D}_{\varvec{X}}^{-1} . \qquad \qquad (13)\)
When \(p>h\), a computationally more convenient form can be obtained by the Sherman-Morrison-Woodbury identity (Sherman and Morrison 1950; Woodbury 1950; Bartlett 1951) as follows:
\(\big [ \rho \, \mathbf {I} + (1-\rho ) \, c_\alpha \, \mathbf {S}_{\varvec{W}}(H_{MRCD}) \big ]^{-1} = \frac{1}{\rho }\left[ \mathbf {I}_p - \varvec{Z}' \left( \frac{(h-1)\,\rho }{(1-\rho )\,c_\alpha } \, \mathbf {I}_h + \varvec{Z}\varvec{Z}' \right)^{-1} \varvec{Z} \right] , \qquad \qquad (14)\)
where \(\varvec{Z}=\varvec{W}_{H_{MRCD}}-\mathbf {m}_{\varvec{W}}(H_{MRCD})\) and hence \(\mathbf {S}_{\varvec{W}}(H_{MRCD})=\varvec{Z}'\varvec{Z}/(h-1)\). Note that the advantage of (14) is that only an \(h\times h\) matrix needs to be inverted, rather than a \(p\times p\) matrix as in (13).
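As an illustration of this computational shortcut, the sketch below inverts \(\rho \, \mathbf {I} + (1-\rho ) c_\alpha \varvec{Z}'\varvec{Z}/(h-1)\) once directly and once through the Woodbury identity, and checks that both agree; the function name and the toy data are ours.

```python
import numpy as np

def regularized_precision_woodbury(Z, rho, c_alpha=1.0):
    """Invert rho*I_p + (1-rho)*c_alpha*Z'Z/(h-1) by inverting only an h x h
    matrix (Woodbury identity), which pays off when p > h."""
    h, p = Z.shape
    gamma = (1.0 - rho) * c_alpha / (h - 1.0)
    small = (rho / gamma) * np.eye(h) + Z @ Z.T          # h x h matrix
    return (np.eye(p) - Z.T @ np.linalg.solve(small, Z)) / rho

# sanity check on toy data with p > h
rng = np.random.default_rng(0)
h, p, rho = 30, 100, 0.25
W_H = rng.normal(size=(h, p))                            # a fictitious h-subset of W
Z = W_H - W_H.mean(axis=0)
direct = np.linalg.inv(rho * np.eye(p) + (1 - rho) * Z.T @ Z / (h - 1))
assert np.allclose(direct, regularized_precision_woodbury(Z, rho))
```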
The MRCD should not be confused with the regularized minimum covariance determinant (RMCD) estimator of Croux et al. (2012). The latter assumes sparsity of the precision matrix, and maximizes the penalized log-likelihood function of each h-subset by the GLASSO algorithm of Friedman et al. (2008). The repeated application of GLASSO is time-consuming.
Choice of target matrix and calibration of \(\rho \)
The MRCD estimate depends on two quantities: the target matrix \(\mathbf {T}\) and the regularization parameter \(\rho \). For the target matrix \(\mathbf {T}\) on \(\varvec{U}\) we can take the identity matrix; relative to the original data \(\varvec{X}\) this is the diagonal matrix with the robustly estimated univariate scales on the diagonal. Depending on the application, we can also take a non-diagonal target matrix \(\mathbf {T}\). When this matrix is estimated in a first step, it should be robust to outliers in the data. A reasonable choice is to compute a rank correlation matrix of \(\varvec{U}\), which incorporates some of the relation between the variables. When we have reasons to suspect an equicorrelation structure, we can set \(\mathbf {T}\) equal to
\(\mathbf {R}_c = c \, \mathbf {J}_p + (1-c) \, \mathbf {I}_p , \qquad \qquad (15)\)
with \(\mathbf {J}_p\) the \(p\times p\) matrix of ones, \(\mathbf {I}_p\) the identity matrix, and \(-1/(p-1)< c < 1\) to ensure positive definiteness. The parameter c in the equicorrelation matrix (15) can be estimated by averaging robust correlation estimates over all pairs of variables, under the constraint that the determinant of \(\mathbf {R}_c\) is above a minimum threshold value.
When the regularization parameter \(\rho \) equals zero \(\mathbf {K}(H)\) becomes the sample covariance \(\mathbf {S}_{\varvec{U}}(H)\;\), and when \(\rho \) equals one \(\mathbf {K}(H)\) becomes the target. We require \(0 \leqslant \rho \leqslant 1\).
To keep the matrix \(\mathbf {K}(H)\) well-conditioned, it is appealing to bound its condition number (Won et al. 2013). The condition number is the ratio between the largest and the smallest eigenvalue and measures numerical stability: a matrix is well-conditioned if its condition number is moderate, whereas it is ill-conditioned if its condition number is high. To ensure that \(\mathbf {K}(H)\) is well-conditioned, it is sufficient to bound the condition number of \(\rho \ \mathbf {I} + (1-\rho ) c_\alpha \mathbf {S}_{\varvec{W}}(H)\). Since the eigenvalues of \(\rho \ \mathbf {I} + (1-\rho ) c_\alpha \mathbf {S}_{\varvec{W}}(H)\) equal
\(\rho + (1-\rho ) \, \lambda _i \qquad \text {for } i=1,\ldots ,p, \qquad \qquad (16)\)
where \(\lambda _1 \geqslant \cdots \geqslant \lambda _p\) are the eigenvalues of \(c_\alpha \mathbf {S}_{\varvec{W}}(H)\),
the corresponding condition number is
\(\frac{\rho + (1-\rho ) \, \lambda _1}{\rho + (1-\rho ) \, \lambda _p} \, .\)
In practice, we therefore recommend a data-driven approach which sets \(\rho \) at the lowest nonnegative value for which the condition number of \(\rho \ \mathbf {I} + (1-\rho ) c_\alpha \mathbf {S}_{\varvec{W}}(H)\) is at most \(\kappa \). This is easy to implement, as we only need to compute the eigenvalues \(\lambda \) of \(c_\alpha \mathbf {S}_{\varvec{W}}(H)\) once. Since regularizing the covariance estimator is our goal and since we mainly focus on very high dimensional data, i.e. situations where p is high compared to the subset size h, we recommend prudence and therefore set \(\kappa =50\) throughout the paper. This is also the default value in the CovMrcd implementation in the R package rrcov (Todorov and Filzmoser 2009).
Note that by this heuristic we only use regularization when needed. Indeed, if \(\mathbf {S}_{\varvec{W}}(H)\) is well-conditioned, the heuristic sets \(\rho \) equal to zero. Also note that the eigenvalues in (16) are at least \(\rho \), so the smallest eigenvalue of the MRCD scatter estimate is bounded away from zero when \(\rho > 0\). Therefore the MRCD scatter estimator has a 100% implosion breakdown value when \(\rho > 0\). Note that no affine equivariant scatter estimator can have a breakdown value above 50% (Lopuhaä and Rousseeuw 1991). The MRCD can achieve this high implosion breakdown value because it is not affine equivariant, unlike the original MCD.
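The sketch below implements this data-driven rule; it assumes the eigenvalues of \(c_\alpha \mathbf {S}_{\varvec{W}}(H)\) have already been computed, and it exploits the fact that the condition number is monotonically decreasing in \(\rho \), so the constraint can be solved in closed form rather than by the line search used in Sect. 3. The function name and the example spectrum are ours.

```python
import numpy as np

def rho_for_condition_number(lambdas, kappa=50.0):
    """Smallest rho in [0, 1] for which the condition number of
    rho*I + (1-rho)*diag(lambdas) is at most kappa, where lambdas are the
    eigenvalues of c_alpha * S_W(H)."""
    lmax, lmin = float(np.max(lambdas)), float(np.min(lambdas))
    if lmax <= kappa * lmin:
        return 0.0                       # already well-conditioned: no regularization
    return (lmax - kappa * lmin) / ((kappa - 1.0) + lmax - kappa * lmin)

# example: an ill-conditioned spectrum, as when p exceeds h
lambdas = np.array([200.0, 5.0, 1.0, 0.0, 0.0])
rho = rho_for_condition_number(lambdas)
shrunk = rho + (1 - rho) * lambdas
print(rho, shrunk.max() / shrunk.min())  # the condition number is now exactly kappa
```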
An algorithm for the MRCD estimator
A naive algorithm for the optimization problem (9) would be to compute \(\det (\mathbf {K}(H))\) for every possible h-subset H. However, for realistic sample sizes this type of brute force evaluation is infeasible.
The original MCD estimator (3) has the same issue. The current solution for the MCD consists of either selecting a large number of randomly chosen initial subsets (Rousseeuw and Van Driessen 1999) or starting from a smaller number of deterministic subsets (Hubert et al. 2012). In either case one iteratively applies so-called C-steps. The C-step of the MCD improves an h-subset \(H_1\) by computing its mean and covariance matrix and then putting the h observations with the smallest Mahalanobis distances in a new subset \(H_2\). The C-step theorem of Rousseeuw and Van Driessen (1999) proves that the covariance determinant of \(H_2\) is lower than or equal to that of \(H_1\), so C-steps lower the MCD objective function.
We will now generalize this theorem to regularized covariance matrices.
Theorem 1
Let \(\varvec{X}\) be a data set of n points in p dimensions, and take any \(n/2< h < n\) and \(0< \rho < 1.\) Starting from an h-subset \(H_1,\) one can compute \(\mathbf {m}_1 = \frac{1}{h} \sum _{i \in H_1} \mathbf {x}_i\) and \(\mathbf {S}_1 = \frac{1}{h} \sum _{i \in H_1} (\mathbf {x}_i-\mathbf {m}_1)(\mathbf {x}_i-\mathbf {m}_1)'\). The matrix
\(\mathbf {K}_1 = \rho \, \mathbf {T} + (1-\rho ) \, \mathbf {S}_1\)
is positive definite and hence invertible, so we can compute
\(d_i(\mathbf {m}_1, \mathbf {K}_1) = (\mathbf {x}_i-\mathbf {m}_1)' \, \mathbf {K}_1^{-1} \, (\mathbf {x}_i-\mathbf {m}_1)\)
for \(i=1,\ldots ,n\). Let \(H_2\) be an h-subset for which
\(\sum _{i \in H_2} d_i(\mathbf {m}_1, \mathbf {K}_1) \ \leqslant \ \sum _{i \in H_1} d_i(\mathbf {m}_1, \mathbf {K}_1) \qquad \qquad (18)\)
and compute \(\mathbf {m}_2 = \frac{1}{h} \sum _{i \in H_2} \mathbf {x}_i\), \(\mathbf {S}_2 = \frac{1}{h} \sum _{i \in H_2} (\mathbf {x}_i-\mathbf {m}_2)(\mathbf {x}_i-\mathbf {m}_2)'\) and \(\mathbf {K}_2 = \rho \mathbf {T} +(1-\rho ) \mathbf {S}_2.\) Then
\(\det (\mathbf {K}_2) \ \leqslant \ \det (\mathbf {K}_1) ,\)
with equality if and only if \(\mathbf {m}_2=\mathbf {m}_1\) and \(\mathbf {K}_2=\mathbf {K}_1\).
The proof of Theorem 1 is given in “Appendix A”.
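To make the use of Theorem 1 concrete, a bare-bones implementation of the generalized C-step iteration on the transformed data \(\varvec{W}\) could look as follows; it is only a sketch (one starting subset, target \(\mathbf {T}=\mathbf {I}\), no consistency factor, \(\rho \) held fixed), whereas the actual algorithm below uses several carefully chosen starting subsets and a common data-driven \(\rho \).

```python
import numpy as np

def c_steps(W, H0, rho, T=None, max_iter=100):
    """Iterate generalized C-steps starting from the index set H0 of size h,
    using the regularized scatter K(H) = rho*T + (1-rho)*S(H) with the
    1/h covariance normalization of Theorem 1."""
    n, p = W.shape
    T = np.eye(p) if T is None else T
    H = np.sort(np.asarray(H0))
    h = len(H)
    for _ in range(max_iter):
        m = W[H].mean(axis=0)
        Wc = W[H] - m
        K = rho * T + (1 - rho) * (Wc.T @ Wc) / h
        diffs = W - m
        d = np.einsum('ij,ij->i', diffs @ np.linalg.inv(K), diffs)  # d_i(m, K)
        H_new = np.sort(np.argsort(d)[:h])        # keep the h smallest distances
        if np.array_equal(H_new, H):              # the subset no longer changes
            return H, K
        H = H_new                                 # by Theorem 1, det(K) did not increase
    return H, K

# toy usage: 60 clean points and 15 far outliers in p = 40 dimensions
rng = np.random.default_rng(2)
W = np.vstack([rng.normal(size=(60, 40)), rng.normal(loc=8, size=(15, 40))])
H, K = c_steps(W, rng.choice(75, size=40, replace=False), rho=0.3)
print(np.sort(H))   # with such far outliers, the subset typically ends up clean
```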
Making use of the generalized C-step, we can now construct the actual algorithm; the MRCD subset is found in step 3 of the pseudocode.

In Step 3.1, we first determine the initial scatter estimates \(\varvec{S}^i\) of \(\varvec{W}\) in the same way as in the DetMCD algorithm of Hubert et al. (2012). This includes the use of steps 4a and 4b of the OGK algorithm of Maronna and Zamar (2002) to correct for inaccurate eigenvalues and guarantee positive definiteness of the initial estimates. For completeness, the OGK algorithm is provided in “Appendix B”. Given the six initial location and scatter estimates, we then determine in step 3.2 the corresponding six initial subsets of h observations with the lowest Mahalanobis distance. In step 3.3, we compute, for each subset, a regularized covariance, where we use line search and formula (16) to calibrate the regularization parameter in such a way that the corresponding condition number is at most 1000. This leads to potentially six different regularization parameters \(\rho _i\).
To ensure comparability of the MRCD covariance estimates on different subsets, we need a unique regularization parameter. In step 3.4, we set by default the final value of the regularization parameter \(\rho \) to the largest of the initial regularization parameters \(\rho _i\). This is a conservative choice ensuring that the MRCD covariance computed on each subset is well-conditioned. In case of outliers in one of the initial subsets, this may however lead to a too large value of the regularization parameter. To safeguard the estimation against this outlier inflation of \(\rho \), we deviate from the default choice when the largest of the initial \(\rho _i\)’s exceeds 0.1: we then set the regularization parameter equal to the median of the initial \(\rho _i\)’s if that median exceeds 0.1, and to 0.1 otherwise. In the simulation study, we find that in practice \(\rho \) tends to be well below 0.1, as long as the MRCD is implemented with a subset size h that is small enough to resist the outlier contamination. A robust implementation of the MRCD thus ensures that regularization is only used when needed.
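In code, the pooling rule just described reduces to a few lines; this is a sketch of the rule as stated above, not an excerpt from the rrcov source.

```python
import numpy as np

def pool_rho(rhos, cutoff=0.1):
    """Combine the subset-specific regularization parameters rho_i: take their
    maximum, unless it exceeds the cutoff (possible outlier inflation), in which
    case fall back to the median of the rho_i, floored at the cutoff."""
    rhos = np.asarray(rhos, dtype=float)
    if rhos.max() <= cutoff:
        return float(rhos.max())
    return float(max(np.median(rhos), cutoff))

print(pool_rho([0.00, 0.00, 0.01, 0.02, 0.00, 0.03]))   # -> 0.03 (default: maximum)
print(pool_rho([0.00, 0.00, 0.01, 0.02, 0.00, 0.40]))   # -> 0.1  (median is below 0.1)
```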
In step 3.6, we recalculate the regularized covariance using \(\rho \) instead of \(\rho _i\) for each subset with \(\rho _i \le \rho \). We then apply C-steps until the subset no longer changes, which typically requires only a few steps. Finally, out of the resulting subsets we select the one with the lowest objective value, and use it to compute our final location and scatter estimates according to (12).
Simulation study
We now investigate the empirical performance of the MRCD. We compare the MRCD estimator to the OGK estimator of Maronna and Zamar (2002), which can also robustly estimate location and scatter in high dimensions but by itself does not guarantee that the scatter matrix is well-conditioned. The OGK estimator, as described in “Appendix B”, does not result from optimizing an explicit objective function like the M(R)CD approach. Nevertheless it often works well in practice. Furthermore, we also compare the MRCD estimator with the RMCD estimator of Croux et al. (2012). We adapted their algorithm to use deterministic instead of random subsets to improve the computation speed. The algorithm that we implemented is described in “Appendix C”.
Data generation setup In the simulation experiment we generated \(M=500\) contaminated samples of size n from a p-variate normal distribution, with \(n \times p\) taken as \(800\times 100\), \(200 \times 100\), \(200\times 200\) or \(200\times 400\). Since the MRCD, RMCD and OGK estimators are location and scale equivariant, we follow Agostinelli et al. (2015), henceforth ALYZ, by assuming without loss of generality that the mean \(\varvec{\mu }\) is \(\mathbf {0}\) and that the diagonal elements of \({{\varvec{\Sigma }}}\) are all equal to unity. As in ALYZ, we account for the lack of affine equivariance of the proposed MRCD estimator by generating the correlation matrix randomly in each replication, so that the performance of the estimator is not tied to a particular choice of correlation matrix. We use the procedure of Sect. 4 in ALYZ, including the iterative correction to ensure that the condition number of the generated correlation matrix is within a tolerance interval around 100. To contaminate the data sets, we follow Maronna and Zamar (2002) and randomly replace \(\lfloor \varepsilon n \rfloor \) observations by outliers along the eigenvector direction of \({\varvec{\Sigma }}\) with smallest eigenvalue, since this is the direction where the contamination is hardest to detect. The distance between the outliers and the mean of the good data is denoted by k, which is set to 50 for medium-sized outlier contamination and to 100 for far outliers. We let the fraction of contamination \(\varepsilon \) be either 0% (clean data), 20% or 40%.
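As an illustration of the contamination scheme (the random ALYZ correlation matrices are omitted here), the following sketch replaces a fraction \(\varepsilon \) of the observations by point outliers at distance k from the mean, along the eigenvector of \({\varvec{\Sigma }}\) with the smallest eigenvalue; the function name is ours.

```python
import numpy as np

def contaminate(X, Sigma, eps=0.2, k=50.0, rng=None):
    """Replace floor(eps*n) randomly chosen rows of X by outliers placed at
    distance k from the mean of the clean data, along the eigenvector of Sigma
    with the smallest eigenvalue."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(X, dtype=float).copy()
    n = X.shape[0]
    center = X.mean(axis=0)                       # mean of the clean data
    evals, evecs = np.linalg.eigh(Sigma)          # eigenvalues sorted ascending
    v = evecs[:, 0]                               # direction of smallest eigenvalue
    idx = rng.choice(n, size=int(np.floor(eps * n)), replace=False)
    X[idx] = center + k * v
    return X
```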
Evaluation setup On each generated data set we run the MRCD with different subset sizes h, taken as 50%, 75%, and 100% of the sample size n, using the data-driven choice of \(\rho \) with the condition number at most 50. As the target matrix, we take either the identity matrix (\(\mathbf {T}=\mathbf {I}_p\)) or the equicorrelation matrix (\(\mathbf {T}=\mathbf {R}_c\)), with equicorrelation parameter robustly estimated as the average Kendall rank correlation. As non-robust benchmark method we compare with the classical regularized covariance estimator as proposed in Ledoit and Wolf (2004). As robust benchmark method we take the RMCD using the same subset sizes as used for the MRCD. We also compare with the OGK estimator where the univariate robust scale estimates are obtained using the MAD or the \(Q_n\) estimator.
We measure the inaccuracy of our scatter estimates \(\varvec{S}_m\) compared to the true covariance \(\varvec{\Sigma }_m\) by their Kullback–Leibler divergence and mean squared error. The Kullback–Leibler (KL) divergence measures how much the estimated covariance matrix deviates from the true one by calculating
The mean squared error (MSE) is given by
Note that the true \(\varvec{\Sigma }_m\) differs across values of m when generating data according to ALYZ. The estimated precision matrices \(\varvec{S}_m^{-1}\) are compared by computing their MSE using the true precision matrices \(\varvec{\Sigma }_m^{-1}\).
Discussion of results The results are reported in Tables 1 and 2. Table 1 presents the simulation scenarios in the absence of outlier contamination and with \(20\%\) contamination, while Table 2 shows the results when there are \(40\%\) outliers present in the data. The left panel shows the MSE of the scatter matrices, the middle panel lists the KL divergence of the scatter matrices and the right panel reports the MSE of the precision matrices.
In terms of the MSE and the KL divergence of the covariance estimates we find that, in the case of no outlier contamination, the MRCD covariance estimate with \(h=n\) has the lowest MSE when \(n>p\). The RMCD estimators perform worse in this situation. If p becomes larger than n, the classical regularized covariance estimator performs best, closely followed by the RMCD and MRCD. Note that for these situations, the OGK estimator clearly has the weakest performance. The performance of the MRCD estimator with \(h=\lceil 0.5 n \rceil \) is clearly worse than that of the MRCD estimator with \(h=n\). This lower efficiency is compensated by the high breakdown robustness. In fact, for both 20% and 40% outlier contamination, the MSE and KL divergence of the MRCD with \(h=\lceil 0.5 n \rceil \) are very similar to those in the absence of outliers, and they are always substantially lower than the MSE of the OGK covariance estimator.
When outliers are added to the data, the Ledoit–Wolf covariance matrix and the MRCD and RMCD estimators with \(h=n\) immediately break down. As expected, the MRCD and RMCD estimators with \(h=\lceil 0.75n\rceil \) perform best when there is \(20\%\) contamination, and \(h=\lceil 0.5n\rceil \) is the only reliable choice when there are \(40\%\) outliers in the data. Note that our proposed estimators outperform the OGK estimator in every situation.
Similar conclusions can be drawn for the performance of the estimated precision matrices. The MRCD and RMCD precision estimates both remain accurate in the presence of outliers as long as the subsample size h does not exceed the number of clean observations. Note that in case the number of variables is equal to or greater than the number of observations (\(p \ge n\)) and the unknown true underlying precision matrix is sparse, i.e. it contains many zeroes, the RMCD has a natural advantage since it estimates sparse precision matrices of h-subsets along the way.
The simulation study also sheds light on how the structure of the data and the presence of outlier contamination affect the calibration of the regularization parameter \(\rho \). Table 3 lists the average value of the data-driven \(\rho \) for the MRCD covariance estimator. Recall that the MRCD uses the smallest value of \(0\leqslant \rho <1\) for which the scatter matrix is well-conditioned, so when the MCD is well-conditioned the MRCD obtains \(\rho =0\) and thus coincides with the MCD in that case. We indeed find that \(\rho \) is close to 0 in the scenarios where \(h>p\) and \(h < n (1- \epsilon )\), and that \(\rho \) remains close to zero when the subset size h is small enough to resist the outlier contamination. It follows that the choice between the identity matrix or the robustly calibrated equicorrelation matrix as target matrix has only a negligible impact on the MSE, provided the MRCD is implemented with a subset size h that is small enough to resist the outlier contamination. When the number of outliers exceeds the subset size, we see that outliers induce higher \(\rho \) values.
In conclusion, the simulation study confirms that the MRCD is a good method for estimating location and scatter in high dimensions. It only regularizes when needed. Even when h is smaller than p, as long as h does not exceed the number of clean observations, the resulting \(\rho \) is typically below 10%, implying that the MRCD strikes a balance between being similar to the MCD for tall data and achieving a well-conditioned estimate in the case of fat data.
Real data examples
We illustrate the MRCD on two datasets with low \(n/p\), so that using the original MCD is not indicated. The MRCD is implemented using the identity matrix as target matrix.
Octane data
The octane data set described in Esbensen et al. (1996) consists of near-infrared absorbance spectra with \(p=226\) wavelengths collected on \(n=39\) gasoline samples. It is known that samples 25, 26, 36, 37, 38 and 39 are outliers, which contain added ethanol (Hubert et al. 2005). Of course, in most applications the number of outliers is not known in advance, so it is not obvious how to set the subset size h. The choice of h matters because increasing h improves the efficiency on uncontaminated data but hurts the robustness to outliers. Our recommended default choice is \(h=\lceil 0.75 n \rceil \), safeguarding the MRCD covariance estimate against up to \(25\%\) of outliers.
Alternatively, one could employ a data-driven approach to select h. This idea is similar to the forward search of Atkinson et al. (2004). It consists of computing the MRCD for a range of h values, and looking for an important change in the objective function or the estimates at some value of h. This is not too hard, since we only need to obtain the initial estimates \(\varvec{S}^i\) once. Figure 1 plots the MRCD objective function (10) for each value of h, while Fig. 2 shows the Frobenius distance between the MRCD scatter matrices of the standardized data (i.e., \(\rho \, \mathbf {I} + (1-\rho )\mathbf {S}_{\varvec{W}}(H_{MRCD})\), as defined in (12)) obtained for \(h-1\) and h. Both figures clearly indicate that there is an important change at \(h=34\), so we choose \(h=33\). The total computation time to produce these plots was only 12 s on an Intel(R) Core(TM) i7-5600U CPU at 2.60 GHz.
We then calculate the MRCD estimator with \(h=33\), yielding \(\rho =0.1149\). Figure 3 shows the corresponding robust distances
\(d_i = \sqrt{(\varvec{x}_i - \mathbf {m}_{MRCD})' \, \mathbf {K}_{MRCD}^{-1} \, (\varvec{x}_i - \mathbf {m}_{MRCD})} ,\)
where \(\mathbf {m}_{MRCD}\) and \(\mathbf {K}_{MRCD}\) are the MRCD location and scatter estimates of (12). The flagged outliers (red triangles) stand out, showing the MRCD has correctly identified the 6 samples with added ethanol.
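Computing and thresholding these robust distances takes only a few lines once the MRCD estimates are available; the sketch below uses the customary \(\sqrt{\chi ^2_{p,0.975}}\) cutoff, which, as noted in the concluding remarks, is only a rough approximation when p is large relative to n.

```python
import numpy as np
from scipy.stats import chi2

def robust_distances(X, m_mrcd, K_mrcd):
    """Robust distances d_i based on the MRCD location and scatter estimates."""
    diffs = np.asarray(X, dtype=float) - m_mrcd
    d2 = np.einsum('ij,ij->i', diffs @ np.linalg.inv(K_mrcd), diffs)
    return np.sqrt(d2)

def flag_outliers(X, m_mrcd, K_mrcd, quantile=0.975):
    """Flag observations whose robust distance exceeds sqrt(chi2_{p, quantile})."""
    cutoff = np.sqrt(chi2.ppf(quantile, df=X.shape[1]))
    return robust_distances(X, m_mrcd, K_mrcd) > cutoff
```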
Fig. 1 Octane data: MRCD objective value (10) for different values of h
Murder rate data
Khan et al. (2007) regress the murder rate per 100,000 residents in the \(n=50\) states of the US in 1980 on 25 demographic predictors, and mention that graphical tools reveal one clear outlier.
For lower-dimensional data, Rousseeuw et al. (2004) applied the MCD estimator to the response(s) and predictors together to robustly estimate a multivariate regression. Here we investigate whether for high-dimensional data the same type of analysis can be carried out based on the MRCD. In the murder rate data this yields a total of 26 variables.
As for the octane data, we compute the MRCD estimates for the candidate range of h. In Fig. 4 we see a big jump in the objective function when going from \(h=49\) to \(h=50\). But in the plot of the Frobenius distance between successive MRCD scatter matrices (Fig. 5) we see evidence of four outliers, which lead to a substantial change in the MRCD when included in the subset.
As a conservative choice we set \(h=44\), which allows for up to 6 outliers. We then partition the MRCD scatter matrix on all 26 variables as follows:
\(\mathbf {K}_{MRCD} = \begin{pmatrix} \mathbf {K}_{xx} & \mathbf {K}_{xy} \\ \mathbf {K}_{yx} & K_{yy} \end{pmatrix} ,\)
where x stands for the vector of predictors and y is the response variable. The resulting estimate of the slope vector is then
\(\hat{\varvec{\beta }} = \mathbf {K}_{xx}^{-1} \, \mathbf {K}_{xy} .\)
The resulting standardized residuals are shown in Fig. 6. The standardized residuals obtained with OLS indicate that there are no outliers in the data, since all residuals lie clearly between the cut-off lines. In contrast, the MRCD regression flags Nevada as an upward outlier and California as a downward outlier. It is therefore recommended to study these states in more detail. Note that both states have very small residuals when using OLS. This is a clear example of the well-known masking effect: classical methods can be affected by outliers so strongly that the resulting fitted model no longer reveals the deviating observations.
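For completeness, the plug-in regression based on the partitioned MRCD estimates can be sketched as follows, with the response stored as the last coordinate; the residual scale is taken as the Schur complement \(K_{yy} - \mathbf {K}_{yx}\mathbf {K}_{xx}^{-1}\mathbf {K}_{xy}\), as in MCD-based multivariate regression (Rousseeuw et al. 2004). Variable names are ours.

```python
import numpy as np

def mrcd_plugin_regression(m_mrcd, K_mrcd):
    """Slope and intercept derived from the joint MRCD estimates of (x, y):
    beta = K_xx^{-1} K_xy and alpha = m_y - m_x' beta."""
    K_xx, K_xy = K_mrcd[:-1, :-1], K_mrcd[:-1, -1]
    m_x, m_y = m_mrcd[:-1], m_mrcd[-1]
    beta = np.linalg.solve(K_xx, K_xy)
    return beta, m_y - m_x @ beta

def standardized_residuals(X, y, m_mrcd, K_mrcd):
    """Residuals divided by the robust error scale implied by the partitioning."""
    beta, alpha = mrcd_plugin_regression(m_mrcd, K_mrcd)
    K_xx, K_xy, K_yy = K_mrcd[:-1, :-1], K_mrcd[:-1, -1], K_mrcd[-1, -1]
    sigma = np.sqrt(K_yy - K_xy @ np.linalg.solve(K_xx, K_xy))
    return (y - alpha - X @ beta) / sigma
```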
Finally, we note that MRCD regression can be plugged into existing robust algorithms for variable selection, which avoids the limitation mentioned in Khan et al. (2007) that “a robust fit of the full model may not be feasible due to the numerical complexity of robust estimation when [the dimension] d is large (e.g., d\(\ge \) 200) or simply because d exceeds the number of cases, n.” The MRCD could be used in such situations because its computation remains feasible in higher dimensions.
Fig. 4 Murder rate data: MRCD objective value (10) for different values of h
Concluding remarks
In this paper we generalized the minimum covariance determinant (MCD) estimation approach of Rousseeuw (1985) to higher dimensions, by regularizing the sample covariance matrices of subsets before minimizing their determinant. The resulting minimum regularized covariance determinant (MRCD) estimator is well-conditioned by construction, even when \(p>n\), and preserves the good robustness of the MCD. We constructed a fast algorithm for the MRCD by generalizing the C-step used by the MCD, and proving that this generalized C-step is guaranteed to reduce the covariance determinant. We verified the performance of the MRCD estimator in an extensive simulation study including both clean and contaminated data. The simulation study also confirmed that the MRCD can be interpreted as a generalization of the MCD. When n is sufficiently large compared to p and the MCD is well-conditioned, the regularization parameter in MRCD becomes zero and the MRCD estimate coincides with the MCD. Finally, we illustrated the use of the MRCD for outlier detection and robust regression on two fat data applications from chemistry and criminology, for which \(p>n/2\).
We believe that the MRCD is a valuable addition to the tool set for robust multivariate analysis, especially in high dimensions. Thanks to the function CovMrcd in the R package rrcov of Todorov and Filzmoser (2009), practitioners and academics can easily apply our methodology in practice. We look forward to further research on its use in principal component analysis, where the original MCD has proved useful (Croux and Haesbroeck 2000; Hubert et al. 2005), and analogously in factor analysis (Pison et al. 2003), classification (Hubert and Van Driessen 2004), clustering (Hardin and Rocke 2004), multivariate regression (Rousseeuw et al. 2004), penalized maximum likelihood estimation (Croux et al. 2012) and other multivariate techniques. A further research topic is to study the finite-sample distribution of the robust distances computed from the MRCD. Our experiments have shown that the usual chi-square and F-distribution results for the MCD distances (Hardin and Rocke 2005) are no longer good approximations when p is large relative to n. A better approximation would be useful for improving the accuracy of the MRCD by reweighting.
References
Agostinelli, C., Leung, A., Yohai, V., Zamar, R.: Robust estimation of multivariate location and scatter in the presence of cellwise and casewise contamination. Test 24(3), 441–461 (2015)
Agulló, J., Croux, C., Van Aelst, S.: The multivariate least trimmed squares estimator. J. Multivar. Anal. 99, 311–338 (2008)
Atkinson, A.C., Riani, M., Cerioli, A.: Exploring Multivariate Data with the Forward Search. Springer, New York (2004)
Bartlett, M.S.: An inverse matrix adjustment arising in discriminant analysis. Ann. Math. Stat. 22(1), 107–111 (1951)
Boudt, K., Cornelissen, J., Croux, C.: Jump robust daily covariance estimation by disentangling variance and correlation components. Comput. Stat. Data Anal. 56(11), 2993–3005 (2012)
Butler, R., Davies, P., Jhun, M.: Asymptotics for the minimum covariance determinant estimator. Ann. Stat. 21(3), 1385–1400 (1993)
Cator, E., Lopuhaä, H.: Central limit theorem and influence function for the MCD estimator at general multivariate distributions. Bernoulli 18(2), 520–551 (2012)
Croux, C., Haesbroeck, G.: Influence function and efficiency of the minimum covariance determinant scatter matrix estimator. J. Multivar. Anal. 71(2), 161–190 (1999)
Croux, C., Haesbroeck, G.: Principal components analysis based on robust estimators of the covariance or correlation matrix: influence functions and efficiencies. Biometrika 87, 603–618 (2000)
Croux, C., Gelper, S., Haesbroeck, G.: Regularized Minimum Covariance Determinant Estimator. Mimeo, New York (2012)
Esbensen, K., Midtgaard, T., Schönkopf, S.: Multivariate Analysis in Practice: A Training Package. Camo As, Oslo (1996)
Friedman, J., Hastie, T., Tibshirani, R.: Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9(2), 432–441 (2008)
Gnanadesikan, R., Kettenring, J.: Robust estimates, residuals, and outlier detection with multiresponse data. Biometrics 28, 81–124 (1972)
Grübel, R.: A minimal characterization of the covariance matrix. Metrika 35(1), 49–52 (1988)
Hardin, J., Rocke, D.: Outlier detection in the multiple cluster setting using the minimum covariance determinant estimator. Comput. Stat. Data Anal. 44, 625–638 (2004)
Hardin, J., Rocke, D.: The distribution of robust distances. J. Comput. Graph. Stat. 14(4), 928–946 (2005)
Hubert, M., Van Driessen, K.: Fast and robust discriminant analysis. Comput. Stat. Data Anal. 45, 301–320 (2004)
Hubert, M., Rousseeuw, P., Vanden Branden, K.: ROBPCA: a new approach to robust principal components analysis. Technometrics 47, 64–79 (2005)
Hubert, M., Rousseeuw, P., Van Aelst, S.: High breakdown robust multivariate methods. Stat. Sci. 23, 92–119 (2008)
Hubert, M., Rousseeuw, P., Verdonck, T.: A deterministic algorithm for robust location and scatter. J. Comput. Graph. Stat. 21(3), 618–637 (2012)
Khan, J., Van Aelst, S., Zamar, R.H.: Robust linear model selection based on least angle regression. J. Am. Stat. Assoc. 102(480), 1289–1299 (2007)
Ledoit, O., Wolf, M.: A well-conditioned estimator for large-dimensional covariance matrices. J. Multivar. Anal. 88, 365–411 (2004)
Lopuhaä, H., Rousseeuw, P.: Breakdown points of affine equivariant estimators of multivariate location and covariance matrices. Ann. Stat. 19, 229–248 (1991)
Maronna, R., Zamar, R.H.: Robust estimates of location and dispersion for high-dimensional datasets. Technometrics 44(4), 307–317 (2002)
Öllerer, V., Croux, C.: Robust high-dimensional precision matrix estimation. In: Modern Nonparametric, Robust and Multivariate Methods, pp. 325–350. Springer (2015)
Pison, G., Rousseeuw, P., Filzmoser, P., Croux, C.: Robust factor analysis. J. Multivar. Anal. 84, 145–172 (2003)
Rousseeuw, P.: Least median of squares regression. J. Am. Stat. Assoc. 79(388), 871–880 (1984)
Rousseeuw, P.: Multivariate estimation with high breakdown point. In: Grossmann, W., Pflug, G., Vincze, I., Wertz, W. (eds.) Mathematical Statistics and Applications, vol. B, pp. 283–297. Reidel Publishing Company, Dordrecht (1985)
Rousseeuw, P., Croux, C.: Alternatives to the median absolute deviation. J. Am. Stat. Assoc. 88(424), 1273–1283 (1993)
Rousseeuw, P., Van Driessen, K.: A fast algorithm for the minimum covariance determinant estimator. Technometrics 41, 212–223 (1999)
Rousseeuw, P., Van Zomeren, B.: Unmasking multivariate outliers and leverage points. J. Am. Stat. Assoc. 85(411), 633–639 (1990)
Rousseeuw, P., Van Aelst, S., Van Driessen, K., Agulló, J.: Robust multivariate regression. Technometrics 46, 293–305 (2004)
Rousseeuw, P., Croux, C., Todorov, V., Ruckstuhl, A., Salibian-Barrera, M., Verbeke, T., Koller, M., Maechler, M.: Robustbase: Basic Robust Statistics. R package version 0.92-3 (2012)
SenGupta, A.: Tests for standardized generalized variances of multivariate normal populations of possibly different dimensions. J. Multivar. Anal. 23(2), 209–219 (1987)
Sherman, J., Morrison, W.J.: Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. Ann. Math. Stat. 21(1), 124–127 (1950)
Todorov, V., Filzmoser, P.: An object-oriented framework for robust multivariate analysis. J. Stat. Softw. 32(3), 1–47 (2009)
Won, J.-H., Lim, J., Kim, S.-J., Rajaratnam, B.: Condition-number-regularized covariance estimation. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 75(3), 427–450 (2013)
Woodbury, M.A.: Inverting modified matrices. Memo. Rep. 42, 106 (1950)
Zhao, T., Liu, H., Roeder, K., Lafferty, J., Wasserman, L.: The huge package for high-dimensional undirected graph estimation in R. J. Mach. Learn. Res. 13, 1059–1062 (2012)
Acknowledgements
This research has benefited from the financial support of the Flemish Science Foundation (FWO) and project C16/15/068 of Internal Funds KU Leuven. We are grateful to Valentin Todorov for adding the MRCD functionality to the R package rrcov (Todorov and Filzmoser 2009), and to Yukai Yang for his initial assistance to this work. We also thank the Editor, two anonymous referees, Dries Cornilly, Christophe Croux, Gentiane Haesbroeck, Sebastiaan Höppner, Stefan Van Aelst and Marjan Wauters for their constructive comments.
Appendices
Appendix A: Proof of Theorem 1
Generate a p-variate sample \(\mathbf {Z}\) with \(p+1\) points for which \({\varvec{\Lambda }} = \frac{1}{p+1}\sum _{i=1}^{p+1} (\varvec{z}_i-\overline{\varvec{z}})(\varvec{z}_i-\overline{\varvec{z}})'\) is nonsingular, where \(\overline{\varvec{z}}=\frac{1}{p+1}\sum _{i=1}^{p+1}\varvec{z}_i\). Then \(\tilde{\varvec{z}}_i={\varvec{\Lambda }}^{-1/2}(\varvec{z}_i-\overline{\varvec{z}})\) has mean zero and covariance matrix \(\mathbf {I}_p\). Now compute \(\varvec{y}_i=\mathbf {T}^{1/2}\tilde{\varvec{z}}_i\), hence \(\mathbf {Y}\) has mean zero and covariance matrix \(\mathbf {T}\).
Next, create the artificial dataset
with \(k=h+p+1\) points, where \(\varvec{x}^{1}_{1},\ldots ,\varvec{x}^{1}_{h}\) are the members of \(H_1\). The factors \(w_i\) are given by
The mean and covariance matrix of \( {\tilde{\mathbf{X}}}^1\) are then
and
The regularized covariance matrix \(\mathbf {K}_1\) is thus the actual covariance matrix of the combined data set \({\tilde{\mathbf{X}}}^1\;\). Analogously we construct
where \(\varvec{x}^{2}_{1},\ldots ,\varvec{x}^{2}_{h}\) are the members of \(H_2\). \({\tilde{\mathbf{X}}}^2\) has zero mean and covariance matrix \(\mathbf {K}_2=(1-\rho ) \mathbf {S}_2 + \rho \mathbf {T}\).
Denote \(d_{\mathbf {K}_1}(\varvec{\tilde{x}}) = \varvec{\tilde{x}'}( \mathbf {K}_1)^{-1}\varvec{\tilde{x}}\). We can then prove that:
in which the second inequality (23) is the condition (18).
The first inequality (22) can be shown as follows. Put \(\varvec{z}_i = (\mathbf {K}_1)^{-1/2}\varvec{x}^2_{i}\) and \(\tilde{\varvec{z}} = (\mathbf {K}_1)^{-1/2}\mathbf {m}_1\) and note that \(\overline{\varvec{z}}=(\mathbf {K}_1)^{-1/2}\mathbf {m}_2\) is the average of the \(\varvec{z}_i\). Then (22) becomes
which follows from the fact that \(\overline{\varvec{z}}\) is the unique minimizer of the least squares objective \(\sum _{i=1}^k \Vert \varvec{z}_i - c \Vert ^2\), so (22) becomes an equality if and only if \(\tilde{\varvec{z}} =\overline{\varvec{z}} \), which is equivalent to \(\mathbf {m}_2=\mathbf {m}_1\).
It follows that
Now put
If we now compute distances relative to \(b \mathbf {K}_1\;\), we find
From the theorem in Grübel (1988), it follows that \(\mathbf {K}_2\) is the unique minimizer of \(\text{ det }(\mathbf {S})\) among all \(\mathbf {S}\) for which \(\frac{1}{k} \sum _{i=1}^k d_{\mathbf {S}}(\varvec{\tilde{x}}^{2}_{i} )=p\) (note that the mean of \(\varvec{\tilde{x}}^{2}_{i}\) is zero). Therefore
We can only have \(\text{ det }(\mathbf {K}_2) = \det (\mathbf {K}_1)\) if both of these inequalities are equalities. For the first, by uniqueness we can only have equality if \(\mathbf {K}_2=b\mathbf {K}_1\). For the second inequality, equality holds if and only if \(b=1\). Combining both yields \(\mathbf {K}_2=\mathbf {K}_1\). Moreover, \(b=1\) implies that (22) becomes an equality, hence \(\mathbf {m}_2=\mathbf {m}_1\). This concludes the proof of Theorem 1.
Appendix B: The OGK estimator
Maronna and Zamar (2002) presented a general method to obtain positive definite and approximately affine equivariant robust scatter matrices starting from a robust bivariate scatter measure. This method was applied to the bivariate covariance estimate of Gnanadesikan and Kettenring (1972). The resulting multivariate location and scatter estimates are called orthogonalized Gnanadesikan-Kettenring (OGK) estimates and are calculated as follows:
1. Let m(.) and s(.) be robust univariate estimators of location and scale.
2. Construct \(\varvec{y}_i=\varvec{D}^{-1}\varvec{x}_i\) for \(i=1,\ldots ,n\) with \(\varvec{D}=\text {diag}(s(X_1),\ldots ,s(X_p))\).
3. Compute the ‘pairwise correlation matrix’ \(\varvec{U}\) of the variables of \(\varvec{Y}=(Y_1,\ldots ,Y_p)\), given by \(u_{jk} = \frac{1}{4} \big (s(Y_j+Y_k)^2-s(Y_j-Y_k)^2\big )\). This \(\varvec{U}\) is symmetric but not necessarily positive definite.
4. Compute the matrix \(\varvec{E}\) of eigenvectors of \(\varvec{U}\) and
   (a) project the data on these eigenvectors, i.e. \(\varvec{V}=\varvec{Y}\varvec{E}\);
   (b) compute ‘robust variances’ of \(\varvec{V}=(V_1,\ldots ,V_p)\), i.e. \(\varvec{\Lambda } = \text {diag}(s^2(V_1),\ldots ,s^2(V_p))\);
   (c) set the \(p \times 1\) vector \(\hat{\varvec{\mu }}(\varvec{Y}) = \varvec{E}\varvec{m}\) where \(\varvec{m}=(m(V_1),\ldots ,m(V_p))^T\), and compute the positive definite matrix \(\hat{\varvec{\Sigma }}(\varvec{Y}) = \varvec{E}\varvec{\Lambda } \varvec{E}^T\).
5. Transform back to \(\varvec{X}\), i.e. \(\hat{\varvec{\mu }}_\text {OGK}= \varvec{D}\hat{\varvec{\mu }}(\varvec{Y})\) and \(\hat{\varvec{\Sigma }}_\text {OGK}= \varvec{D}\hat{\varvec{\Sigma }}(\varvec{Y}) \varvec{D}^T\).
Step 2 makes the estimate location invariant and scale equivariant, whereas the next steps replace the eigenvalues of \(\varvec{U}\) (some of which may be negative) by positive numbers. In the simulation study and empirical analysis, we set m(.) to the median and s(.) to either the median absolute deviation or the Qn scale estimator. We use the implementation in the R package rrcov of Todorov and Filzmoser (2009).
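A compact version of these five steps, with m(.) the median and s(.) the MAD, might look as follows; it is only a sketch of the steps above, not the rrcov implementation.

```python
import numpy as np

def mad(x):
    """Median absolute deviation with the usual consistency factor 1.4826."""
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def ogk(X, m=np.median, s=mad):
    """Orthogonalized Gnanadesikan-Kettenring location and scatter (steps 1-5)."""
    X = np.asarray(X, dtype=float)
    p = X.shape[1]
    # step 2: scale the variables
    d = np.array([s(col) for col in X.T])
    Y = X / d
    # step 3: pairwise 'correlations' u_jk = (s(Yj+Yk)^2 - s(Yj-Yk)^2)/4
    U = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            U[j, k] = U[k, j] = (s(Y[:, j] + Y[:, k])**2 -
                                 s(Y[:, j] - Y[:, k])**2) / 4.0
    # step 4: orthogonalize and re-estimate location/scale on the projections
    _, E = np.linalg.eigh(U)
    V = Y @ E
    Lam = np.diag([s(col)**2 for col in V.T])
    mu_Y = E @ np.array([m(col) for col in V.T])
    Sigma_Y = E @ Lam @ E.T
    # step 5: transform back to the original scale
    D = np.diag(d)
    return D @ mu_Y, D @ Sigma_Y @ D.T
```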
Appendix C: The RMCD estimator
The RMCD as initially proposed by Croux et al. (2012) uses random subsets. Below we give its adaptation using deterministic subsets. We thank Christophe Croux and Gentiane Haesbroeck for their helpful guidance in specifying the proposed detRMCD algorithm, which closely follows the MRCD algorithm presented in Sect. 3. It uses the GLASSO algorithm of Friedman et al. (2008), as implemented in the package huge of Zhao et al. (2012).

Keywords
- Breakdown value
- High-dimensional data
- Regularization
- Robust covariance estimation