The main contributions of robust statistics to statistical science and a new challenge

In the first part of the paper, we trace the development of robust statistics through its main contributions which have penetrated mainstream statistics. The goal of this paper is neither to provide a full overview of robust statistics, nor to make a complete list of its tools and methods, but to focus on basic concepts that have become standard ideas and tools in modern statistics. In the second part we focus on the particular challenge provided by high-dimensional statistics and discuss how robustness ideas can be used and adapted to this situation.


Introduction
Robust statistics deals with deviations from ideal models and their dangers for corresponding inference procedures. Its primary goal is the development of procedures which are still reliable and reasonably efficient under small deviations from the model, i.e. when the underlying distribution lies in a neighborhood of the assumed model. Therefore, one can view robust statistics as an extension of parametric statistics, taking into account that parametric models are at best only approximations to reality.
If we consider the seminal papers [18,24,51] as the formal beginning of the field, research building on them has continued without interruption for more than half a century. This shows not only that the field is still active, but more importantly that it has penetrated mainstream statistics. In order to evaluate its impact on the general theory and practice of statistics, we do not provide an extensive review of the field, but focus on basic ideas, concepts, and tools developed early on, which form the backbone of robust statistics, have become standard tools in modern statistics, and have had an important impact on its development.
The paper is organized as follows. In Sect. 2 we list and discuss the main contributions of robust statistics which have penetrated mainstream statistics and have become standard ideas and tools in modern statistics. Section 3 is devoted to the particular challenge provided by high-dimensional statistics and discusses the role of robust statistics in this situation. In the last section we draw some conclusions.

Main contributions of robust statistics
In this section we focus specifically on some key ideas developed in the framework of robust statistics and analyze their impact on modern statistics and data science. This is not a full review of robust statistics, but rather a list of basic ideas which originated within the robustness literature and have become standard ideas in modern statistics.

Models as approximations
It is a basic tenet of science that models are only approximations to reality. However, perhaps because of the great success of statistical theory and practice starting from Fisher and continuing in the forties and the fifties, the implications of the sometimes stringent assumptions underlying the derivation of optimal statistical procedures have been somewhat neglected.
Tukey's seminal paper [51] opened the eyes of the statistical community to the dramatic loss of efficiency of optimal procedures in the presence of small deviations from the assumed stochastic model. Of course, good data analysts had long been aware of this danger, but Tukey's paper called for a systematic and theoretical investigation of the problem, with the goal of developing procedures which are robust against such deviations. This aspect is perhaps becoming even more important nowadays with the flourishing of (new) procedures and tools to analyze complex data.

Data analysis
Robust methods often provide multiple solutions to a given statistical (data-analysis) problem. For instance, and at the very least, the data analyst has to decide how much robustness and efficiency s(he) would like to impose on a given procedure. This opens the door to possible multiple analyses of a statistical (data-analysis) problem, a point, among many others, stressed by Tukey in [52], a path-breaking paper on the future of data analysis. Almost 60 years later, this is an important issue in the present discussion about the role of data science; for a general discussion, see [11]. Incidentally, Tukey's paper was unique also in its form: it was much longer (67 pages) than a typical paper published in the Annals of Mathematical Statistics and it contained almost no explicit mathematical development. Sometimes the possibility of providing different analyses of a given data-analytic problem is viewed as a negative point. Notice, however, that the seeming uniqueness and optimality of a classical statistical procedure such as the least squares estimator in the linear model is often obtained by paying a high price, either in terms of stringent stochastic assumptions (e.g. normality of the errors) or by heavy restrictions on the class of admissible procedures (e.g. restriction to linear estimators, as in the Gauss-Markov theorem).

The minimax approach
[24] was a seminal paper and contains several important contributions. Among others, Huber provided an elegant game-theoretic solution in the location model, by formalizing the robustness problem as a game between Nature, which chooses a distribution G in the neighborhood F_ε(F) of the model F (see (2)), and the Statistician, who chooses an estimator for the location parameter in the class {ψ} of M−estimators (see (1)), where the payoff is the asymptotic variance V(ψ, G) of the estimator. This game has a saddlepoint (G̃, ψ̃), where ψ̃ is the Maximum Likelihood Estimator under the least favorable distribution G̃, i.e. the distribution minimizing the Fisher information in the neighborhood. Therefore, there exists a minimax estimator ψ̃, which solves the problem

min_ψ max_{G ∈ F_ε(F)} V(ψ, G),

i.e. it minimizes the worst possible asymptotic variance of an M−estimator in the neighborhood F_ε(F).
When F is the normal distribution, ψ̃ is the so-called Huber function shown in Fig. 1, and the corresponding M−estimator is the Huber estimator.
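To make this concrete, the Huber location estimator can be computed by iteratively reweighted means; the following minimal Python sketch (with the conventional tuning constant c = 1.345 and the MAD as a preliminary scale estimate, both chosen here purely for illustration) shows how the bounded ψ function limits the effect of an outlier:

```python
import numpy as np

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimator of location via iteratively reweighted means,
    with scale fixed in advance at the (rescaled) MAD."""
    x = np.asarray(x, dtype=float)
    s = np.median(np.abs(x - np.median(x))) / 0.6745  # MAD scale estimate
    mu = np.median(x)                                 # robust starting point
    for _ in range(max_iter):
        r = (x - mu) / s
        # Huber weights psi(r)/r = min(1, c/|r|): large residuals are downweighted
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# A sample with one gross outlier: the Huber estimate stays near the bulk.
data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 50.0])
print(huber_location(data))   # close to 10, unlike the sample mean (about 15.7)
```

The weights ψ(r)/r leave the central observations essentially untouched and give the outlier a weight near zero, which is why the estimate barely moves while the sample mean is pulled to about 15.7.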
This formalization of the problem through minimax theory was further exploited in [25] to formalize robust testing, with an elegant interpretation in the framework of capacities and upper and lower probabilities; see [29].

Statistical functionals
Statistical functionals play a central role in Hampel's approach to robustness; see [18,19]. The basic idea is to view statistical procedures as functionals of an underlying distribution G and study their behavior in a neighborhood of a model distribution F.
Derivatives of functionals, such as Gâteaux and Fréchet derivatives, are used to linearize a functional by means of the first term of a von Mises expansion ([54]) and to describe its local stability. In particular, the influence function (the Gâteaux derivative in the direction of a point mass) is a key tool to investigate the robustness properties of a statistical procedure and to construct new robust methods. Its boundedness is crucial to achieve local robustness. The importance of the influence function goes beyond its role in robust statistics. For instance, it has a strong connection with the jackknife, and it appears as the linearization of any asymptotically normal estimator and therefore in its asymptotic variance. These ideas have been extended to semiparametric and nonparametric models; see [8]. Statistical functionals are key concepts in modern statistics, e.g. in nonparametric statistics and in the bootstrap and other resampling methods.
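As an illustration (not taken from the paper), the finite-sample analogue of the influence function, the sensitivity curve, can be computed directly; the sketch below contrasts the unbounded influence of the mean with the bounded influence of the median:

```python
import numpy as np

def sensitivity_curve(estimator, sample, x):
    """Finite-sample analogue of the influence function: the scaled change
    in the estimate when one observation at x is added to the sample."""
    n = len(sample) + 1
    return n * (estimator(np.append(sample, x)) - estimator(sample))

rng = np.random.default_rng(0)
sample = rng.normal(size=99)   # clean sample from the model

for x in (2.0, 10.0, 100.0):
    ic_mean = sensitivity_curve(np.mean, sample, x)
    ic_med = sensitivity_curve(np.median, sample, x)
    print(f"x = {x:6.1f}   mean: {ic_mean:9.2f}   median: {ic_med:6.2f}")
```

The influence of the mean grows linearly with the position x of the contaminating point, while the influence of the median is constant once x is beyond the bulk of the data: boundedness of the influence function made visible.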

M-estimators

M-estimators are solutions of estimating equations ([16]), defined at the population level by orthogonality or moment conditions.
Huber ([24,26,27]) defined M-estimators as the building blocks to construct new robust estimators and investigated in detail their statistical properties. Noteworthy is his proof in [26] of the consistency and asymptotic normality of multivariate M-estimators under very weak assumptions. In this context appears the so-called sandwich estimator of the asymptotic covariance matrix of an M-estimator; see [13,26,57].
Extensions and further developments of M-estimators include the Generalized Method of Moments (when dim(ψ) > dim(β) in (1)) by Hansen ([21]), a backbone of modern econometrics because the estimating equations (1) are often derived from economic theory to characterize economic models, and Generalized Estimating Equations by Liang and Zeger ([34]), an important technique for the analysis of longitudinal data in biostatistics.
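As a minimal illustration, the sandwich estimator A/B² of the asymptotic variance of a location M−estimator with Huber score can be computed as follows (scale is fixed at 1 for simplicity of the sketch; the tuning constant c = 1.345 gives 95% efficiency at the normal model, so the value should be near 1/0.95 ≈ 1.053):

```python
import numpy as np

def sandwich_variance(x, mu_hat, c=1.345):
    """Sandwich estimate A/B^2 of the asymptotic variance of a location
    M-estimator with Huber score psi (scale fixed at 1 for this sketch)."""
    r = x - mu_hat
    psi = np.clip(r, -c, c)          # Huber score evaluated at the estimate
    A = np.mean(psi**2)              # "meat": second moment of the score
    B = np.mean(np.abs(r) <= c)      # "bread": mean derivative of the score
    return A / B**2

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
mu = np.median(x)
for _ in range(50):                  # Newton-type fixed-point steps for the M-estimate
    mu += np.mean(np.clip(x - mu, -1.345, 1.345)) / np.mean(np.abs(x - mu) <= 1.345)
print(sandwich_variance(x, mu))      # close to 1/0.95 ≈ 1.053 at the normal model
```

The same A/B² (in matrix form, B⁻¹AB⁻¹) structure appears for multivariate M-estimators and is exactly the sandwich covariance estimator of [13,26,57].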

The breakdown point
The breakdown point ([18]) is a measure of global reliability for a statistical procedure and gives the largest percentage of contamination that a procedure can tolerate before it can become arbitrarily biased. It provides a worst-case scenario and it can be obtained by a typical back-of-the-envelope calculation.
This concept has opened up the search for procedures with a high breakdown point, which allow one to separate the structure encompassing the bulk (or the majority) of the data from that possibly forming an important minority group. Therefore, these are useful exploratory tools that allow one to discover patterns in the data. Their development has revisited old concepts such as the depth of a data cloud ([35,43,53]) and has opened up new research directions in different areas with an important impact on data analysis and computational statistics; see e.g. the forward search in [2,3].
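A brute-force version of the back-of-the-envelope calculation can also be carried out numerically; the following sketch confirms that the mean breaks down with a single corrupted observation, while the median tolerates just under half of the sample being corrupted:

```python
import numpy as np

def worst_case_shift(estimator, x, n_bad, bad_value=1e9):
    """Replace n_bad observations by an arbitrarily large value and
    report how far the estimate moves: a brute-force breakdown check."""
    x_bad = x.copy()
    x_bad[:n_bad] = bad_value
    return abs(estimator(x_bad) - estimator(x))

x = np.arange(1.0, 22.0)                   # 21 clean observations
print(worst_case_shift(np.mean, x, 1))     # mean ruined by a single bad point
print(worst_case_shift(np.median, x, 10))  # median still bounded with 10/21 bad
print(worst_case_shift(np.median, x, 11))  # ... and breaks down at 11/21
```

Pushing `bad_value` further confirms the dichotomy: the shift of the mean grows without bound for any contaminated point, whereas the shift of the median stays bounded until more than half of the sample is replaced.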

Teaching
In view of the points mentioned above, it seems important to include basic robustness concepts both in undergraduate and graduate curricula in statistics and data science as well as in fields of applications. This is more effective and natural than treating robust statistics as a special (exotic) and advanced topic at the graduate level. The mathematical treatment can always be adapted to the level of the course and shouldn't represent an obstacle to convey the basic ideas and tools.
For instance, the influence function and the breakdown point can be viewed through familiar concepts of calculus: the former is a derivative that can be used to linearize complicated functions, whereas the latter describes a pole of a function.

A challenge: high-dimensional statistics
Large and complex data sets are increasingly common in science, and we face the challenge of providing suitable procedures for analyzing these data and of investigating their statistical properties. In this framework (e.g. when the number of variables diverges with the sample size), deviations from the assumptions can be expected to have a larger impact on statistical procedures, and robust statistics is likely to play an important role; see the discussion about stability in [58].

Robustifying penalized methods
Let us first focus on penalized methods, which have proved particularly useful for estimation and model selection in high-dimensional problems and have been studied extensively. Good reviews of the topic are provided by [15,50], and [22], and a more detailed discussion can be found in [9]. In particular, many results concerning e.g. oracle properties are available for linear regression under Gaussian or sub-Gaussian errors.
From the experience with methods without penalization, it is intuitively clear that penalized estimators based on classical likelihoods (such as Lasso, based on a square loss function in linear regression) will be affected by outlying points and will suffer from robustness problems. It is then natural to try to robustify these methods by modifying their loss function. Along these lines, several authors proposed robust versions of Lasso in linear models: [1] proposed a trimmed version, [33] provided a screening method based on rank correlations, [7] proposed the Lasso penalty for quantile regression, [14] extended the latter by proposing an adaptive penalized estimator, and [37] and [36] used a redescending loss. All these papers include simulation studies indicating that these robustified versions are indeed robust under some types of deviations from the stochastic assumptions. However, there is not much work on the theoretical characterization of robustness for these and more general methods. Some exceptions are [1,55], where the authors study the breakdown point of some penalized methods for linear models, [4], where a rigorous definition of the influence function of penalized M−estimators is provided, and [49], where the theoretical properties of an adaptive version of the Huber regression estimator are investigated.
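As an illustrative sketch of such a robustification (a generic proximal-gradient implementation, not the specific method of any of the papers cited above), one can replace the square loss in the Lasso by the Huber loss; the bounded score then limits the influence of outlying responses:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def huber_lasso(X, y, lam, c=1.345, n_iter=2000):
    """Proximal-gradient (ISTA) sketch of a Huberized Lasso:
    minimize (1/n) sum_i huber_c(y_i - x_i' beta) + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    L = np.linalg.eigvalsh(X.T @ X / n).max()    # Lipschitz constant of the gradient
    step = 1.0 / L
    for _ in range(n_iter):
        r = y - X @ beta
        grad = -X.T @ np.clip(r, -c, c) / n      # bounded Huber score psi
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Sparse regression with 10% gross outliers in the response.
rng = np.random.default_rng(2)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + rng.normal(size=n)
y[:20] += 30.0
beta_hat = huber_lasso(X, y, lam=0.1)
print(np.round(beta_hat, 2))
```

Since ψ is bounded, each observation's contribution to the gradient is bounded, which is precisely the local-robustness requirement discussed above; with a square loss, the 20 shifted responses would dominate the fit.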
From a theoretical point of view, it is important to investigate the behavior of penalized methods not only when the errors follow the distribution of the classical model F, but also when it lies in an ε-neighborhood of it. Under appropriate conditions on the penalized M−estimator (including the boundedness of its score function and its derivative), if the bias in the neighborhood is not too large and the minimum signal is large enough, we obtain correct support recovery and bounded bias, i.e. a robust penalized M−estimator behaves as well as a robust oracle by providing:
– sparsity: β̂_2 = 0 for large n with high probability, where β_2 is the zero component of the parameter β;
– bounded bias in ℓ∞-norm: ‖β̂_1 − β_1‖∞ = O(n^(−ζ) log n + ε), where β_1 is the non-zero component of the parameter β;
see [6]. Notice that the score function of classical penalized methods such as Lasso is unbounded. Thus, their bias in ℓ∞-norm in the neighborhood is infinite.

Saturation in linear models
A complementary perspective on the interplay between robustness and sparsity in linear models is provided by the so-called saturated regression model (or mean-shift outlier model)

y_i = x_i^T β + γ_i + ε_i,   i = 1, …, n,

where β ∈ R^d with d < n and the γ_i are nonzero when observation i is an outlier. It turns out that, by minimizing

Σ_{i=1}^n (y_i − x_i^T β − γ_i)² + Σ_{i=1}^n p_λ(γ_i)

over β and γ for a given penalty p_λ(·), we obtain an estimator of β matching the one obtained by minimizing Σ_{i=1}^n ρ(y_i − x_i^T β) for some loss function ρ(·). This is an M−estimator for β with score function ψ(·), the derivative of ρ(·). For instance, the Huber estimator is obtained by using the Lasso penalty for p_λ(·). This idea goes back to [45] (in the case of the Huber estimator) and to [12,39,46,47] (in the context of approximate message passing). It has also been successfully exploited by David Hendry and coauthors in the econometrics literature (Autometrics), first as a variable selection tool and more recently as an outlier detection technique.
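The equivalence can be checked numerically: with the Lasso penalty on γ, the γ-step is a soft-thresholding, so the shifted residual y_i − x_i^T β − γ_i equals the residual clipped at ±λ, and the alternating minimization below (an illustrative sketch) converges to the Huber M−estimate, as verified by the Huber score equation at the end:

```python
import numpy as np

def meanshift_huber(X, y, lam=1.345, n_iter=500):
    """Alternate minimization of ||y - X beta - gamma||^2 / 2 + lam * ||gamma||_1
    over (beta, gamma): the Lasso penalty on the shifts gamma reproduces
    the Huber M-estimator of beta with tuning constant lam."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # non-robust LS start
    for _ in range(n_iter):
        r = y - X @ beta
        gamma = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)  # soft-threshold step
        beta = np.linalg.lstsq(X, y - gamma, rcond=None)[0]    # LS on shifted data
    return beta, gamma

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=100)
y[:10] += 15.0                                         # ten gross outliers
beta_ms, gamma = meanshift_huber(X, y, lam=1.345)
# At convergence the Huber score equation X' psi(y - X beta) = 0 holds:
print(np.abs(X.T @ np.clip(y - X @ beta_ms, -1.345, 1.345)).max())
```

The nonzero components of γ flag the outlying observations, which is exactly how this formulation doubles as an outlier detection device.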
In the past few years this approach has become a popular tool in the Machine Learning community to enforce robustness in available algorithms. We believe that its connection to M−estimation opens the door to a beneficial cross-fertilization between the sparse modeling literature and robust statistics.

Conclusion
Robust statistics has contributed in an important way to the development of modern statistics by providing many ideas, concepts, and tools that are now part of mainstream statistics. There is no doubt that robustness will follow the present development of statistics and data analysis and face the same multiple challenges. A typical case is the development of robust procedures for high-dimensional and complex problems tackled by machine learning algorithms; see the median-of-means method of [32] and the robust gradient estimation method of [41].
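For instance, the median-of-means idea of [32] can be sketched in a few lines (an illustrative implementation, not the exact procedure of that paper): split the sample into blocks, average within each block, and take the median of the block means.

```python
import numpy as np

def median_of_means(x, n_blocks=10, rng=None):
    """Median-of-means estimator of the mean: block averages are combined
    by a median, so a few gross errors spoil at most a few blocks."""
    rng = np.random.default_rng(rng)
    x = rng.permutation(np.asarray(x, dtype=float))   # random block assignment
    blocks = np.array_split(x, n_blocks)
    return np.median([b.mean() for b in blocks])

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(size=997), np.full(3, 1e6)])  # 3 gross errors
print(median_of_means(data, n_blocks=10, rng=0))   # near 0
print(data.mean())                                  # ruined: about 3000
```

The three corrupted points can spoil at most three of the ten block means, so the median of the block means is unaffected; this trade between the efficiency of the mean and the robustness of the median is a direct descendant of the ideas reviewed above.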