Outliers in official statistics

The purpose of this manuscript is to provide a survey of important methods for addressing outliers in the production of official statistics. Outliers are often unavoidable in survey statistics. They may reduce the information of survey datasets and distort estimation at each step of the statistical production process. This paper defines the outliers of concern at each production step and introduces practical methods to cope with them. The statistical production process is roughly divided into three steps. The first step is data cleaning, where the outliers of concern are those that may contain mistakes to be corrected. Robust estimators of a mean vector and covariance matrix are introduced for this purpose. The next step is imputation. Among the variety of imputation methods, regression and ratio imputation are the subjects of this paper. The outliers of concern in this step are not erroneous but have extreme values that may distort parameter estimation. Robust estimators that are not affected by such remaining outliers are introduced. The final step is estimation and formatting. Here we have to be careful about outliers that have extreme values together with large design weights, since they have a considerable influence on the final statistical products. Weight calibration methods controlling this influence are discussed based on the robust weights obtained in the preceding imputation step. A few examples of practical application are also provided briefly, although the multivariate outlier detection methods introduced in this paper are mostly still at the research stage in the field of official statistics.


What are outliers
Outliers are extreme or atypical values that can reduce and distort the information in a dataset. The problem of how to deal with outliers has long been a concern. Barnett and Lewis (1994, p. 3), one of the pioneering books on outlier detection in mathematical statistics, references Pierce (1852), published more than 150 years ago. Eliminating outliers from estimation carries the risk of losing information, while retaining them carries the risk of contamination. To deal with the problem, Barnett and Lewis (1994, p. 3) devised a principle to accommodate outliers using robust methods of inference, allowing for the use of all the data while alleviating the undue influence of outliers. We follow this principle and focus on the robust statistical methods introduced by Huber (1964), which are the most suitable for survey data processing. Therefore, statistical tests are beyond the scope of our discussion.
Some outliers in survey statistics contain a mistake of some sort that requires correction. Others may not involve a mistake, but represent a trend different from that of the majority while having a large design weight in the dataset. Careful consideration of the influence of outliers on estimation needs to be given, and statisticians compiling official statistics need to determine whether such extreme values deserve their prescribed sampling weights in terms of representativeness, as discussed by Chambers (1986).
While the UNECE Data Editing Group broadly defines outliers as observations in the tails of a distribution (Economic Commission for Europe of the United Nations 2000, p. 10), narrower definitions vary depending on the purpose of the activities in the statistical production process. Outliers require appropriate treatment at each processing step; otherwise, they may negatively impact estimation efficiency and introduce bias into the resulting statistical product. The objective of this paper is to introduce both practical methods currently in use and experimental methods under research that are intended for use in statistical production to address the problem of data outliers.
Most conventional outlier detection methods in the field of official statistics are univariate approaches, mainly applied to the search for erroneous observations so that they can be corrected and entirely valid datasets can be established. A range check that sets upper and lower thresholds for "normal" (i.e., not outlying) data is a typical example, as is the quartile method. However, such univariate methods cannot detect multivariate outliers, that is, outliers involving different relationships among the variables. In multivariate cases, scatter plot matrices and other visualization techniques have frequently been used instead of multivariate methods because of the latter's computational complexity and processing time, or because of the difficulty associated with inspecting detected multivariate outliers. Further complicating matters, with multivariate methods, just which outliers are detected can depend on which particular method is used.
Historically, statistical tables have been the major final product of official statistics, which means that the demand for detecting multivariate outliers having different relationships among the variables has not been high. However, in 2007, the Statistics Act of Japan (Act No. 53) was revised for the first time in 60 years. The new act recognizes official statistics as an information infrastructure and promotes the use of microdata (e.g., Nakamura 2017). Given this change in policy, the need to detect multivariate outliers has increased, since outliers tend to be more problematic in microdata, not only for users but also for providers, in terms of privacy protection. Besides, the practical usability of multivariate outlier detection methods is increasing with the continuing improvements in computer technology, in both hardware and statistical software.
In the next subsection, a general model of the statistical production process is described. The model consists of three steps: data cleaning, imputation, and estimation and formatting. The outliers to be focused on depend on the purpose of each step.
In Sect. 2, available multivariate outlier detection methods for the data cleaning step are discussed. Section 3 describes robust regression for imputation. The M-estimators discussed in Sect. 3 are then extended to the ratio model in Sect. 4. Calibration of design weights to cope with outliers having large design weights is discussed in Sect. 5. Section 6 provides two examples of practical use of the introduced methods. Concluding remarks and a discussion of future work are given in Sect. 7.

General model of the statistical production process
Figure 1 provides a general model of the statistical production process for surveys, beginning with raw electronic data. The first step is data cleaning. In this step, erroneous data are detected for correction to ensure a clean, valid dataset. The second step is imputation, where missing values are estimated and replaced as necessary to produce complete datasets for the analysis to be conducted in the next step. The final step involves estimation and formatting to produce the final statistical product.

Data cleaning
The objective of the data cleaning step is to find and correct errors and inconsistencies. Consequently, the outliers of concern in this step are those with a high likelihood of containing an error or inconsistency. Any detected outlier is checked and may be left unchanged if it is not wrong. Otherwise, it is corrected based on available information when possible, or removed and, if necessary, estimated in the imputation step to ensure a clean dataset.
Section 2 focuses on multivariate outlier detection methods, especially those for elliptical distributions, since these types of methods have not been widely used in practice.

Imputation
Missing data are often unavoidable in survey statistics. Discarding records with missing values may cause biased estimation even when the values are missing at random (MAR) (Little and Rubin 2002, pp. 117-127). Therefore, essential variables for estimation often require missing data imputation. Since the input to the imputation step is clean data (from Step 1), the outliers here are not erroneous data but rather extreme values that may distort the estimation used for imputation. An example is high leverage points in regression estimation; such points may have a substantial influence on the resulting imputed values.
From among the many imputation methods available, this paper focuses on linear regression and ratio imputation. In general, introducing robust estimation improves the efficiency of the imputation compared to ordinary least squares (OLS) when applied to datasets that have longer tails than the normal distribution.
Robust regression imputation is discussed in Sect. 3, followed by robust ratio imputation in Sect. 4.

Estimation and formatting
In the final step of the statistical production process, the outliers in need of attention are those having large design weights. As an illustration, suppose a particular record in a household survey has a design weight of 1000 and a household income of 5 million yen (approximately 46,000 USD) per month (an atypically high income level). This is very likely to cause a problem in the statistical tables produced from the survey: this one very wealthy household is treated as representative of 1000 other households in the area that were not surveyed, and as a consequence, the population estimate of household income for the area will reflect 1000 households with a monthly income of 5 million yen. Design weight calibration based on such "outlyingness" is discussed in Sect. 5.

Multivariate outlier detection methods for elliptically distributed datasets
We begin with outlier detection methods for unimodal numerical data, first establishing the difference between univariate and multivariate methods, and then introducing several multivariate methods with desirable characteristics. The methods introduced in this section are mainly used for data cleaning purposes.

Univariate methods versus multivariate methods
Univariate methods for numerical data are conventionally used in the data cleaning step to identify erroneous observations. A common practice is to set the thresholds for valid data (i.e., non-outliers) at a distance of three sigma (or more, depending on the distribution) from the mean of a target dataset. This method is essentially the idea of a control chart in the field of total quality management (TQM); however, it is not robust, as the thresholds are supposed to be determined from a dataset in stable condition (i.e., a dataset without outliers) (Teshima et al. 2012, pp. 173-174). It is well known that with the three-sigma rule, or any other non-robust method, determining thresholds from contaminated datasets induces a masking effect; the thresholds of such methods must therefore be determined from datasets free of outliers, and robust methods are needed when thresholds have to be determined from contaminated datasets. Noro and Wada (2015) illustrate the problem and recommend using order statistics such as the interquartile range (IQR). A box-and-whisker plot using the IQR, as proposed by Tukey (1977), is commonly used when the target dataset is slightly asymmetric. If the dataset is highly asymmetric, an appropriate data transformation may be necessary before applying the method. The scatterplot in Fig. 2 highlights the differences between robust methods and their non-robust counterparts, as well as the distinction between univariate and multivariate methods. It displays the Hertzsprung-Russell star dataset (Rousseeuw and Leroy 1987, p. 28), which contains extreme outliers. The yellow rectangular area shows the thresholds according to the three-sigma rule; the green area shows the thresholds identified by the box-and-whisker method. Both are univariate methods. The orange lines in the diagram show probability ellipses drawn with the classical mean vector and covariance matrix.
Although this represents a multivariate approach, it, too, induces the masking effect, as does the three-sigma rule, when applied to contaminated datasets. The red probability ellipses are drawn using modified Stahel-Donoho (MSD) estimators produced with robust principal component analysis (PCA) based on Béguin and Hulliger (2003). MSD and other multivariate methods are discussed in the next subsection.
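The masking effect can also be illustrated numerically. The following sketch (Python, with synthetic data rather than the star dataset; all values are illustrative) contrasts three-sigma thresholds, which gross outliers inflate until a moderate outlier passes unnoticed, with Tukey's IQR fences:

```python
import numpy as np

# Synthetic data: 100 well-behaved observations plus one moderate
# and two gross outliers (values are illustrative).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(50, 5, 100), [150.0, 500.0, 520.0]])

# Three-sigma rule: mean and SD are computed from the contaminated
# data, so the thresholds are dragged outward by the gross outliers.
lo3, hi3 = x.mean() - 3 * x.std(), x.mean() + 3 * x.std()

# Tukey's box-and-whisker fences based on the IQR are barely affected.
q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
lo_t, hi_t = q1 - 1.5 * iqr, q3 + 1.5 * iqr

n_sigma = int(np.sum((x < lo3) | (x > hi3)))   # flagged by three-sigma
n_tukey = int(np.sum((x < lo_t) | (x > hi_t)))  # flagged by IQR fences
```

With these data the three-sigma upper threshold lands far above the moderate outlier at 150, which is therefore masked, while the IQR fences flag all three planted outliers.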

Multivariate outlier detection methods for elliptical distributions
To evaluate and compare current methods for the editing and imputation of data, Eurostat conducted the EUREDIT project between March 2001 and February 2003. A series of reports was published and made available at https://www.cs.york.ac.uk/euredit/, along with five papers published in the Journal of the Royal Statistical Society. In one of the papers, Béguin and Hulliger (2004) note that NSOs had not used multivariate methods except for the Annual Wholesale and Retail Trade Survey (AWRTS) at Statistics Canada. Franklin and Brodeur (1997) report that modified Stahel-Donoho (MSD) estimators had been adopted for AWRTS and describe the algorithm. Béguin and Hulliger (2003) suggest several improvements to the estimators. Wada (2010) implemented both the original and improved MSD estimators in R and confirmed that the suggestions by Béguin and Hulliger (2003) do indeed improve performance, although the improved version of the MSD estimators suffers from the curse of dimensionality. Since the improved version is incapable of processing more than 11 variables on a 32-bit PC, Wada and Tsubaki (2013) implemented an R function using parallel computing so that it can be applied to higher-dimensional datasets. Béguin and Hulliger (2003) suggest guiding principles for outlier detection, including good detection capability, high versatility, and simplicity. They examined several methods that estimate a mean vector and covariance matrix for elliptically distributed datasets with a high breakdown point compared to M-estimators (Huber 1981), as well as other desirable properties such as affine and orthogonal equivariance. The methods include Fast-MCD (Rousseeuw and van Driessen 1999), which approximates the minimum covariance determinant (MCD) proposed by Rousseeuw (1985) and Rousseeuw and Leroy (1987); BACON by Billor et al. (2000), named for Francis Bacon; and the Epidemic Algorithm (EA) proposed by Hulliger and Béguin (2001), in addition to the MSD estimators used by Statistics Canada. Béguin and Hulliger (2003) compared some of these methods and found that BACON showed better detection capacity than EA for the UK Annual Business Inquiry (ABI) dataset; however, they conclude that this particular dataset does not require a sophisticated robust method.
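The concentration idea behind Fast-MCD can be sketched in a few lines. This is a simplified illustration only (a single random start and plain concentration steps, with no subsampling, reweighting, or consistency correction), not the full algorithm of Rousseeuw and van Driessen (1999); the data are synthetic:

```python
import numpy as np

def c_steps(X, h, n_iter=20, seed=0):
    # Toy concentration (C-) steps: fit mean/covariance on a subset of
    # size h, then keep the h points with the smallest Mahalanobis
    # distances, and repeat until the subset stops changing.
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    subset = rng.choice(n, size=h, replace=False)
    for _ in range(n_iter):
        mu = X[subset].mean(axis=0)
        cov = np.cov(X[subset].T)
        diff = X - mu
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        new = np.argsort(d2)[:h]
        if set(new) == set(subset):
            break
        subset = new
    return mu, cov, d2

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], 200)
X[:10] += [8.0, -8.0]           # plant 10 outliers off the correlation line
mu, cov, d2 = c_steps(X, h=150)
flagged = np.argsort(d2)[-10:]  # the 10 largest robust distances
```

The planted points are univariately unremarkable only in direction, not here, but the essential feature survives: a classical fit would be dragged toward them, while the concentrated fit isolates them as the largest robust distances.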

Multivariate outlier detection for regression imputation
After removing or correcting erroneous data in the data cleaning step, the next step is the imputation of missing values of essential variables. From the variety of imputation methods available, the focus here is on regression imputation. Typically, OLS is used to estimate the parameters of a linear regression model; however, it is well known that the presence of outliers makes such parameter estimation unreliable. Even after the data cleaning step, survey datasets may still contain outliers in another sense. These remaining outliers are assumed to be correct; however, extreme values in the long tails of a data distribution carry the risk of distorting the parameter estimation used for imputation, regardless of their correctness. OLS regression therefore requires such outliers to be removed manually. Survey observations are divided into (sometimes a large number of) imputation classes, so that a uniform response mechanism can be assumed within each class, and parameter estimation is conducted separately in each imputation class. A robust regression method relieves us of the burden of removing outliers from each imputation class beforehand.
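A minimal sketch of regression imputation within imputation classes follows. The two classes, variable names, and data are illustrative, and plain OLS is used where a robust estimator from this section would otherwise be substituted:

```python
import numpy as np

def ols(x, y):
    # OLS fit of y = b0 + b1 * x; a robust fit would replace this call.
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(2)
x = rng.uniform(10, 20, 40)
y = 3.0 + 2.0 * x + rng.normal(0, 0.5, 40)   # true model: y = 3 + 2x
cls = np.repeat([0, 1], 20)                  # two imputation classes
miss = np.zeros(40, bool)
miss[[5, 25]] = True                         # two y values to impute

y_imp = y.copy()
for c in (0, 1):                             # estimate per class ...
    fit = (cls == c) & ~miss
    b0, b1 = ols(x[fit], y[fit])
    fill = (cls == c) & miss
    y_imp[fill] = b0 + b1 * x[fill]          # ... and replace missing y
```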
In this section, we examine M-estimation for regression, one of the most popular methods. The disadvantages of M-estimation are also discussed, together with other methods that cope with those disadvantages.

Parameter estimation of the location and regression
Generally, an M-estimate for any estimate T_n with independent random variables x_1, …, x_n is defined as the solution of the minimization problem min Σ_{i=1}^n ρ(x_i; T_n). Suppose the arbitrary function ρ has a derivative ψ(x; θ) = (∂/∂θ)ρ(x; θ); then T_n satisfies the implicit equation Σ_{i=1}^n ψ(x_i; T_n) = 0. Huber (1964) discusses the robust estimation of a mean vector, proposes M-estimation of a location with Σ_{i=1}^n ψ(x_i − T_n) = 0, and proves its consistency as well as asymptotic normality. Huber (1973) then extends the idea to the regression model

y = Xβ + ε,  (1)

with an objective variable y = (y_1, …, y_n)^⊤, where the error term ε_i ∼ N(0, σ²). The M-estimator minimizes Σ_{i=1}^n ρ((y_i − x_i^⊤β)/σ) on condition that ρ is differentiable, convex, and symmetric around zero. The estimation equation is Σ_{i=1}^n ψ((y_i − x_i^⊤β)/σ) x_i = 0. Due to the condition on ρ described above, ψ is supposed to be a bounded and continuous odd function, since ψ = ρ′. Residuals e_i = y_i − x_i^⊤β are standardized by a measure of scale σ to make the estimation scale equivariant. Then, β is estimated by solving

Σ_{i=1}^n w_i (y_i − x_i^⊤β) x_i = 0,  (2)

with a weight function w defined by w_i = ψ(e_i)/e_i, i.e., w_i = w(e_i). Equation (2) can be re-expressed in matrix form as X^⊤WXβ = X^⊤Wy, and consequently, β is estimated by

β̂ = (X^⊤WX)^{−1} X^⊤Wy,  (3)

where X = (x_1, …, x_n)^⊤ is an n × (p + 1) matrix of the explanatory variables and W = diag(w_i) is an n × n diagonal matrix of weights. After all, M-estimators for regression can be regarded as weighted least squares (WLS) estimators with weights based on the residuals.
Japanese Journal of Statistics and Data Science (2020) 3:669-691
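As a numerical illustration of the location case, the estimating equation Σψ(x_i − T_n) = 0 can be solved by iterating the weighted mean with weights ψ(e_i)/e_i. The fixed MAD scale and the simple iteration below are one common implementation choice, not necessarily that of the cited papers:

```python
import numpy as np

def huber_psi(e, k=1.345):
    return np.clip(e, -k, k)  # identity in the core, capped in the tails

def m_location(x, k=1.345, tol=1e-8, max_iter=100):
    # Huber's location M-estimate: solve sum(psi((x_i - T)/s)) = 0 for T
    # by iterating the weighted mean, with the scale s fixed at the
    # (SD-normalized) MAD of the data.
    s = np.median(np.abs(x - np.median(x))) / 0.6745
    T = np.median(x)
    for _ in range(max_iter):
        e = (x - T) / s
        e_safe = np.where(e == 0, 1.0, e)
        w = np.where(e == 0, 1.0, huber_psi(e_safe, k) / e_safe)  # psi(e)/e
        T_new = np.sum(w * x) / np.sum(w)
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    return T

x = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 50.0])  # one gross outlier
T = m_location(x)  # close to 10, unlike the sample mean (about 15.7)
```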

IRLS algorithm for regression
The intercept of M-estimators for regression is location equivariant, and the slope is location invariant; however, they are not scale equivariant when the scale parameter is given. Scale equivariance is achieved by estimating the scale parameter simultaneously and using it to standardize the residuals. Beaton and Tukey (1974) propose the IRLS algorithm to solve (3) with simultaneous estimation of the scale parameter. Holland and Welsch (1977) recommend it over Newton's method, which is theoretically desirable but difficult to implement, and over Huber's method (Huber 1973; Bickel 1973), which requires more iterations. The IRLS algorithm requires an appropriate initial estimate β̂^(0) and uses it to obtain a better next estimate β̂^(1), together with the scale estimate σ̂, based on the equation

β̂^(j+1) = (X^⊤W^(j)X)^{−1} X^⊤W^(j)y,

where the weights in W^(j) = diag(w_i^(j)) are computed from the residuals y − Xβ̂^(j) standardized by σ̂^(j). The calculation is repeated until a convergence condition is met. The superscript j represents the iteration number.
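The IRLS iteration can be sketched as follows, with Huber weights and a MAD scale re-estimated at each step; this is a simplified version of the scheme, not the exact published algorithm:

```python
import numpy as np

def irls(X, y, k=1.345, max_iter=50, tol=1e-8):
    # IRLS for Huber M-regression: start from OLS, then alternate
    # (i) residuals and MAD scale, (ii) Huber weights, (iii) a WLS fit.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    w = np.ones(len(y))
    for _ in range(max_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # MAD scale
        e = r / s
        w = np.where(np.abs(e) <= k, 1.0, k / np.abs(e))  # Huber weights
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new, w
        beta = beta_new
    return beta, w

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 50)
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, 50)
y[:3] += 25.0                                  # three vertical outliers
X = np.column_stack([np.ones_like(x), x])
beta, w = irls(X, y)                           # near (1, 2) despite outliers
```

The weighted least-squares step is implemented by scaling rows with the square roots of the weights, which is algebraically identical to solving X^⊤WXβ = X^⊤Wy.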
There are several choices for the measure of scale σ̂. They are discussed below together with the selection of a weight function, since the two are closely related.

Weight functions and measures of scale
Robust weights w_i in (2) are computed by a weight function. Although there are a variety of choices (see, e.g., Antoch and Ekblom 1995; Zhang 1997), we discuss the two most popular weight functions here. One is Huber's weight function, proposed by Huber (1964):

w(e_i) = 1 if |e_i| ≤ k, and w(e_i) = k/|e_i| otherwise.  (4)

This weight function is proved to have a unique solution regardless of the initial values (e.g., Maronna et al. 2006, p. 350), and its estimation efficiency is high with normal or nearly normal datasets (e.g., Hampel 2001; Wada and Noro 2019). The other is Tukey's biweight function by Beaton and Tukey (1974):

w(e_i) = (1 − (e_i/c)²)² if |e_i| ≤ c, and w(e_i) = 0 otherwise.  (5)

This weight function performs well with longer-tailed datasets, although, unlike Huber's weight function, it does not promise a global solution. The difference between the two weight functions lies in their behavior for extreme outliers: Tukey's function gives zero weight to observations very far from the others, while Huber's function never gives zero weight and thus cannot escape the influence of extreme outliers. The tuning constants k in (4) and c in (5) are applied to standardized residuals e_i = r_i/σ̂, where r_i = y_i − x_i^⊤β̂. Huber's weight function is commonly used with the median absolute deviation (MAD) as the measure of scale. Tukey's biweight function is also used with MAD (e.g., Holland and Welsch 1977; Mosteller and Tukey 1977, p. 357); however, there are also cases using the average absolute deviation (AAD). Andrews et al. (1972), who conducted a large-scale Monte Carlo experiment involving robust estimation of the location parameter, show that MAD is better than AAD or the IQR for M-estimators; however, it has not been proved that MAD is better than other scale parameters in the case of regression (Huber and Ronchetti 2009, pp. 172-173). Holland and Welsch (1977) compare several weight functions with MAD as the measure of scale and show by a Monte Carlo experiment that Huber's weight function has better efficiency than the biweight function, while Bienias et al. (1997) use Tukey's biweight function with an AAD scale and mention its convergence efficiency.
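The two weight functions can be written compactly as code (a direct transcription, with standardized residuals e and the conventional 95%-efficiency constants in SD units):

```python
import numpy as np

def huber_w(e, k=1.345):
    # Huber's weight: 1 in the core, k/|e| in the tails; never exactly
    # zero. Written as k / max(|e|, k), which equals min(1, k/|e|).
    e = np.asarray(e, float)
    return k / np.maximum(np.abs(e), k)

def biweight_w(e, c=4.685):
    # Tukey's biweight: smooth descent, exactly zero beyond |e| > c.
    e = np.asarray(e, float)
    u = np.clip(e / c, -1.0, 1.0)
    return np.where(np.abs(e) <= c, (1 - u**2)**2, 0.0)

e = np.array([0.0, 1.0, 5.0, 100.0])  # standardized residuals
hw = huber_w(e)     # stays positive even at e = 100
bw = biweight_w(e)  # zero for the two extreme residuals
```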
Wada and Noro (2019) compare the four estimators combining these two weight functions and the two measures of scale in a Monte Carlo experiment with long-tailed datasets under asymmetric contamination. It is known that 95% asymptotic efficiency at the standard normal distribution is obtained with the tuning constant k = 1.3450 for Huber's function (e.g., Ray 1983, p. 108) and c = 4.6851 for the biweight function (e.g., Ray 1983, p. 112). These figures are based on the standard deviation (SD); the corresponding figures for MAD and AAD can be obtained from the relations

σ_MAD = Φ^{−1}(3/4) σ_SD ≈ 0.6745 σ_SD and σ_AAD = √(2/π) σ_SD ≈ 0.7979 σ_SD,

where Φ is the cumulative distribution function of the standard normal distribution and σ_SD, σ_MAD, and σ_AAD are the scale parameters based on SD, MAD, and AAD, respectively. Wada and Noro (2019) obtain the results shown in Table 1 and compare the four estimators based on the standardized tuning constants shown in Table 2. The range of the constants for the biweight function with AAD is based on the values of Tukey's k in Bienias et al. (1997), which is part of the series of reports for official statistics from the Euredit Project conducted from 2000 to 2003 (Barcaroli 2002) and funded by Eurostat. A smaller value of these tuning constants makes the estimation more resistant to outliers, while a larger value increases estimation efficiency. Wada and Noro (2019) conclude that AAD is computationally more efficient than the widely used MAD for both weight functions; moreover, AAD is more suitable than MAD for Tukey's biweight function. The compared estimators are available in a public repository (see Table B in the Appendix). Wada and Tsubaki (2018) suggest choosing between the two weight functions according to purpose: Tukey's biweight function rather than Huber's in the case of imputation, since the breakdown point of M-estimators for regression is 1/n, the same as for OLS.
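The conversion of tuning constants between scale measures can be computed directly from these relations, assuming the raw (un-normalized) MAD or AAD is used as the scale estimate; the standardized values in Tables 1 and 2 of Wada and Noro (2019) may be rounded differently:

```python
from math import pi, sqrt
from statistics import NormalDist

# For the standard normal: MAD ~ 0.6745 * SD and AAD ~ 0.7979 * SD.
mad_per_sd = NormalDist().inv_cdf(0.75)  # Phi^{-1}(3/4)
aad_per_sd = sqrt(2 / pi)                # sqrt(2/pi)

# 95%-efficiency constants in SD units ...
k_sd, c_sd = 1.3450, 4.6851
# ... rescaled for use with a raw MAD or AAD scale estimate.
k_mad = k_sd / mad_per_sd                # Huber constant, MAD scale
c_aad = c_sd / aad_per_sd                # biweight constant, AAD scale
```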
Rousseeuw and Leroy (1987) report that the oldest definition of the breakdown point was given by Hodges (1967) for univariate parameter estimation and that Hampel (1971) generalized it. The definition offered by Donoho and Huber (1983) is for a finite sample. Given a sample of size n, let Z = {(x_1, y_1), …, (x_n, y_n)}, and let T be the regression estimator applied to Z. A new sample, Z′, is created by replacing m of the observations in Z arbitrarily. Let bias(m; T, Z) be the maximum bias produced by the contamination of the replacements in the sample:

bias(m; T, Z) = sup_{Z′} ‖T(Z′) − T(Z)‖.

If bias(m; T, Z) is infinite, the indication is that contamination of size m breaks down the estimator. In general, the finite-sample breakdown point of estimator T for sample Z is

ε*_n(T, Z) = min{ m/n : bias(m; T, Z) is infinite }.

This can be regarded as the ratio of the smallest number of outliers that can take the value of T arbitrarily far from what would otherwise be obtained. A breakdown point of 1/n means that only one extreme observation in a dataset of any size can adversely affect the estimation, and the breakdown point approaches 0% with large samples. Nevertheless, Tukey's biweight function can eliminate the influence of extreme observations by giving them zero weight, unlike Huber's weight function; this is the reason it is recommended for imputation. Such outliers are ignored only in estimating imputed values, while they are still used in the survey enumeration. On the other hand, if M-estimators for regression are used for population estimation, i.e., for directly estimating the figures that appear in final products such as statistical tables, Huber's weight function might be more suitable, as it never gives zero weight. Giving zero weight to observations in producing final survey statistics means discarding valid observations. Generally, survey statisticians working on official statistics avoid wasting precious data, since the data are obtained from questionnaires filled in by respondents who bear the response burden in good faith.
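The 1/n breakdown is easy to demonstrate numerically: replacing a single observation moves the sample mean arbitrarily far, while a high-breakdown estimator such as the median is unaffected in this example:

```python
import numpy as np

x = np.arange(1.0, 11.0)   # ten observations: 1, 2, ..., 10
x_bad = x.copy()
x_bad[-1] = 1e9            # replace a single observation (m = 1)

# The mean explodes; the median of this sample does not move at all.
shift_mean = abs(x_bad.mean() - x.mean())
shift_median = abs(np.median(x_bad) - np.median(x))
```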

Robust estimators to cope with outliers in explanatory variables
M-estimators have another weakness in addition to their low breakdown point: they are not robust against outliers in the explanatory variables. LMS (least median of squares), proposed by Hampel (1975) and extended by Rousseeuw (1984), LTS (least trimmed squares) by Rousseeuw (1984), and S-estimators by Rousseeuw and Yohai (1984) have higher breakdown points than M-estimators and can also cope with outliers in the explanatory variables. Unfortunately, all of them are computationally demanding (see, e.g., Rousseeuw and Leroy 1987; Huber and Ronchetti 2009, pp. 195-198 for more details). The use of these estimators may still be in the research stage in the field of official statistics, although software is available and may be widely used in other fields. Generalized M (GM)-estimators and MM-estimators are popular methods. GM-estimators were introduced by Schweppe (as given in Hill 1977) and by Coakley and Hettmansperger (1993); their algorithms and software are available in Wilcox (2005). MM-estimators were first presented by Yohai (1987). Wilcox (2005) implemented an R function called bmreg for Schweppe-type GM-estimators and chreg for the GM-estimators of Coakley and Hettmansperger (1993). The CRAN package robustbase also has the lmrob function, which implements both the MM-estimators of Yohai (1987) and the SMDM-estimators of Koller and Stahel (2011), who achieve a 50% breakdown point and 95% asymptotic efficiency by improving MM-estimators. Bagheri et al. (2010) compare M-estimators, the MM-estimator, the Schweppe-type GM-estimator, and the GM-estimator proposed by Coakley and Hettmansperger (1993), concluding that the GM-estimator of Coakley and Hettmansperger was the best among the group. Wada and Tsubaki (2018) examine M-estimators and the GM-estimators of Coakley and Hettmansperger (1993) for weight calibration, which will be discussed in Sect. 5.
They mention that the explanatory variables chosen for imputation are often selected from among the auxiliary variables used for stratification in sample surveys. If this is the case, outliers in the explanatory variables are not expected, and M-estimators could be more suitable than GM-estimators, which reduce the robust weight of leverage points in addition to that of outliers in the objective variable; this provides robustness but reduces estimation efficiency.

Difference between regression imputation and ratio imputation
In regression imputation, missing values y_i of the target variable are replaced by estimated values ŷ_i based on a regression model with auxiliary variables x, using the complete observations of all those x and y in the target dataset (e.g., De Waal et al. 2011, p. 230).
Ratio imputation is a special case of regression imputation (De Waal et al. 2011, pp. 244-245), in which missing y_i are replaced using the ratio of y to a single observed auxiliary variable x. Specifically, the ratio model is

y_i = βx_i + ε_i, ε_i ∼ N(0, σ²x_i),  (6)

where missing y_i are replaced by ŷ_i = β̂x_i with the estimated ratio

β̂ = Σ_{i=1}^n y_i / Σ_{i=1}^n x_i,  (7)

where the data (x_i, y_i), i = 1, …, n, are the n observed elements in the imputation class. (See Cochran 1977, pp. 150-164 for further details.) The ratio model (6) resembles the single regression model without an intercept,

y_i = βx_i + ε_i, ε_i ∼ N(0, σ²).  (8)

However, there is a difference in the error terms: the error term of the ratio model (6) is heteroscedastic, with variance proportional to x_i and scale parameter σ, while the error in the regression model (8) is homoscedastic, as in the regression model (1). Because of this difference, the ratio model has an advantage over regression models in imputation in its ability to fit heteroscedastic datasets without data transformations, which can make the estimation of means and totals unstable. On the other hand, the heteroscedastic error is an obstacle to robustifying the ratio estimator by means of M-estimation.
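Ratio imputation as described above takes only a few lines; the imputation class, data, and missingness pattern below are illustrative:

```python
import numpy as np

# One imputation class with a heteroscedastic ratio relationship
# (error standard deviation proportional to sqrt(x)), true ratio 0.6.
rng = np.random.default_rng(4)
x = rng.uniform(1, 100, 30)
y = 0.6 * x + np.sqrt(x) * rng.normal(0, 0.2, 30)
obs = np.ones(30, bool)
obs[[3, 17]] = False                       # two missing y values

beta_hat = y[obs].sum() / x[obs].sum()     # ratio of sums over observed data
y_imp = np.where(obs, y, beta_hat * x)     # missing y replaced by beta*x
```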

Generalization and robustification of the ratio model
For robustification, Wada and Sakashita (2017) and Wada et al. (2021) re-formulate the original ratio model with the heteroscedastic error term ε_i as

y_i = βx_i + √(x_i) τ_i, τ_i ∼ N(0, σ²),

since the two error terms discussed above have the relation ε_i = √(x_i) τ_i. They then extend the model to

y_i = βx_i + x_i^γ τ_i, τ_i ∼ N(0, σ²),  (9)

with an error term proportional to x_i^γ. The corresponding ratio estimator becomes

β̂ = Σ_{i=1}^n x_i^{1−2γ} y_i / Σ_{i=1}^n x_i^{2−2γ}.  (10)

When γ = 1/2, the model (9) and the estimator (10) correspond to the original ratio model (6) and the estimator (7). The generalized model has different features according to the value of γ; it also corresponds to the single regression model (8) when γ = 0. Takahashi et al. (2017) also discuss the same model for datasets following the log-normal model and propose an estimation of γ.
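The generalized estimator can be checked against its stated special cases numerically. The closed form below is written to match those cases (γ = 1/2 gives the ratio of sums, γ = 0 gives OLS without an intercept) and is our reading of estimator (10):

```python
import numpy as np

def beta_gamma(x, y, gamma):
    # Weighted least squares under an error term proportional to x^gamma,
    # written to reproduce the stated special cases of the estimator.
    return np.sum(x**(1 - 2 * gamma) * y) / np.sum(x**(2 - 2 * gamma))

rng = np.random.default_rng(5)
x = rng.uniform(1, 50, 25)
y = 0.8 * x + rng.normal(0, 0.1, 25)

b_half = beta_gamma(x, y, 0.5)  # = sum(y)/sum(x), the ratio of sums
b_zero = beta_gamma(x, y, 0.0)  # = OLS without an intercept
```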
The robustified generalized ratio estimator of Wada and Sakashita (2017) and Wada et al. (2021) is

β̂ = Σ_{i=1}^n w_i x_i^{1−2γ} y_i / Σ_{i=1}^n w_i x_i^{2−2γ},  (11)

where w_i is obtained by a weight function applied to the homoscedastic quasi-residuals (y_i − β̂x_i)/x_i^γ and a scale parameter σ̂. Wada and Sakashita (2017) and Wada et al. (2021) considered the generalized ratio model with fixed values of γ, which requires model selection before estimation for imputation. Wada, Takata and Tsubaki (2019) proposed eliminating the model selection step by simultaneously estimating β and γ in (11) by means of two-stage least squares (2SLS) (e.g., Greene 2002, p. 79) with iterations. The initial estimate of β is obtained by OLS under the model (6); this estimate is not efficient but is unbiased under heteroscedasticity.
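The robustified estimator can be sketched as follows: biweight weights are computed on the homoscedastic quasi-residuals and the weighted ratio estimate is iterated. The quasi-residual form (y_i − β̂x_i)/x_i^γ, the AAD-type scale, and the fixed iteration count are our reading of estimator (11), not the exact published algorithm:

```python
import numpy as np

def robust_ratio(x, y, gamma=0.5, c=4.685, n_iter=30):
    # Iterate: quasi-residuals -> scale -> biweight weights -> weighted
    # generalized ratio estimate.
    beta = np.sum(y) / np.sum(x)                   # non-robust start
    for _ in range(n_iter):
        tau = (y - beta * x) / x**gamma            # homoscedastic residuals
        s = np.mean(np.abs(tau - np.median(tau)))  # AAD-type scale
        e = tau / s
        w = np.where(np.abs(e) <= c,
                     (1 - np.clip(e / c, -1, 1)**2)**2, 0.0)
        beta = (np.sum(w * x**(1 - 2 * gamma) * y)
                / np.sum(w * x**(2 - 2 * gamma)))
    return beta

rng = np.random.default_rng(6)
x = rng.uniform(1, 100, 40)
y = 0.5 * x + np.sqrt(x) * rng.normal(0, 0.1, 40)
y[0] = 60.0 * x[0]                                 # one extreme outlier
b_rob = robust_ratio(x, y)                         # close to the true 0.5
b_raw = y.sum() / x.sum()                          # pulled up by the outlier
```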

Further development: simultaneous estimation of β and γ
Using an instrumental variable, γ̂ is obtained from the residuals r_i = y_i − β̂x_i; that is, γ is obtained as a single regression parameter. A new β̂ is then obtained using γ̂, and a new γ̂ is in turn estimated based on the latest β̂. See Wada, Takata and Tsubaki (2019) for more details of the algorithm. The implemented function, named RBred, is available together with its non-robust version, called Bred (see Appendix). An evaluation was made using contaminated random datasets. Estimation by RBred was found to have better efficiency than the R optim function; in particular, its estimation of β appears to be useful for imputation. However, further evaluation may be necessary, especially regarding the estimation of γ, although the value of γ is not needed for imputation.

Weight calibration
The outliers of concern in the estimation and formatting step are extreme values with large design weights. The Horvitz-Thompson estimator is widely used to estimate finite population means and totals in conventional statistical surveys. In such cases, design weights, which are the inverse of the sampling rate, are used as multipliers for each observation. The problem lies in deciding whether an extreme observation deserves its design weight. (A weight of 1000 applied to an observation means that the value of the observation represents 1000 population elements that were not sampled.) Chambers (1986) considers this outlier problem and argues that for "nonrepresentative" outliers, or unique data points that have been judged free of any errors or mistakes, the design weight should be one, corresponding to a single population element. This implies that such outliers do not represent other population elements that were not sampled, and consequently they do not influence the estimation in any substantial way. Wada and Tsubaki (2018) propose a design weight calibration method utilizing the robust weights obtained by the M-estimators for regression described in Sect. 3. Henry and Valliant (2012) classify estimation methods for population means or totals in sample surveys into model-based, design-based, and model-assisted approaches; the proposed method corresponds to the last of these, the model-assisted approach.
To illustrate, consider selecting a sample by random sampling without replacement from a finite population U containing N elements u_l, l = 1, …, N. The extracted sample S contains n elements v_i, i = 1, …, n. Let π = n/N be the probability that a population element is included in the sample S. The associated design weight for a sampled element i in S is g_i = 1/π; therefore, ∑_{i=1}^{n} g_i = N. The Horvitz-Thompson (HT) estimator (Horvitz and Thompson 1952) for the population total T = ∑_{l=1}^{N} u_l in this case is T̂_HT = ∑_{i=1}^{n} g_i v_i. This is also called the inverse probability weighting (IPW) estimator.
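A minimal numerical illustration of the HT estimator under equal inclusion probabilities (the data are invented): each sampled value is multiplied by its design weight, and the products are summed.

```python
def ht_total(sample_values, design_weights):
    """Horvitz-Thompson (inverse probability weighting) estimator of the
    population total: each sampled value times its design weight, summed."""
    return sum(g * v for g, v in zip(design_weights, sample_values))

# Population of N = 1000 elements sampled at rate n/N = 0.01, so g_i = 100:
sample = [12.0, 8.5, 10.2, 9.9, 11.4, 9.0, 10.8, 10.1, 9.6, 10.5]
weights = [100.0] * len(sample)      # n = 10, sum(weights) = N = 1000
estimate = ht_total(sample, weights)
```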
It is known that the efficiency of the HT-estimator decreases in the presence of outliers or when it is applied to non-normal data distributions, since HT-estimators have characteristics similar to OLS estimators. To improve the efficiency of the HT-estimator, Wada and Tsubaki (2018) use robust regression estimation. The idea is to adjust the conventional design weight g_i by multiplying it by the robust weight w_i obtained by (3) after the iterations converge, since w_i, determined by the residual, can be regarded as an indicator of "outlyingness." An additional adjustment is necessary so that the adjusted weights g*_i = g_i w_i satisfy the condition ∑_{i=1}^{n} g*_i = N. The following two adjustments have been proposed:

g**_i = N g_i w_i / ∑_{j=1}^{n} g_j w_j, (12)

g***_i = 1 + (N − n) g_i w_i / ∑_{j=1}^{n} g_j w_j. (13)

The adjustment shown in (12) is a natural form; however, the adjusted weight g**_i becomes zero when w_i = 0. In such cases, the corresponding observation is effectively removed from the population estimation process. For official statistics, however, ignoring a sampled observation is not desirable, since the observation exists in the population. For this reason, the adjustment shown in (13), which guarantees a minimum value of 1 for each g***_i, is proposed. Design weight calibration by robust weights has the advantage that the loss of estimation efficiency is smaller than that of model-based approaches when the data distribution deviates from the applied model. Wada and Tsubaki (2018) confirm the usefulness of the proposed adjustment (13) in Monte Carlo simulations with random and real datasets.
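The two adjustments can be sketched as follows. The functions below implement the properties stated in the text, namely rescaling g_i w_i to sum to N (where an observation with w_i = 0 gets weight 0) and guaranteeing a floor of 1 while distributing the remaining N − n proportionally; the exact published formulas are (12) and (13) in Wada and Tsubaki (2018), and this sketch may differ from them in detail.

```python
def calibrate_rescale(g, w):
    """Rescale g_i * w_i so the adjusted weights sum to N = sum(g).
    An observation with robust weight w_i = 0 gets adjusted weight 0."""
    N = sum(g)
    s = sum(gi * wi for gi, wi in zip(g, w))
    return [N * gi * wi / s for gi, wi in zip(g, w)]

def calibrate_floor_one(g, w):
    """Give every observation a minimum weight of 1 (it represents at least
    itself) and distribute the remaining N - n proportionally to g_i * w_i."""
    N, n = sum(g), len(g)
    s = sum(gi * wi for gi, wi in zip(g, w))
    return [1.0 + (N - n) * gi * wi / s for gi, wi in zip(g, w)]

# Example: N = 50, n = 5, one observation fully downweighted (w = 0)
g = [10.0] * 5
w = [1.0, 1.0, 1.0, 1.0, 0.0]
g2 = calibrate_rescale(g, w)    # sums to 50, last weight drops to 0
g3 = calibrate_floor_one(g, w)  # sums to 50, last weight is kept at 1
```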
One disadvantage of this weight calibration method may be that a weight is assigned to each variable within an observation, whereas in conventional approaches the design weight is assigned to the observation as a whole (i.e., all variables in an observation share the same design weight).

MSD estimators for unincorporated enterprise survey
The Unincorporated Enterprise Survey, conducted by the Statistics Bureau of Japan, Ministry of Internal Affairs and Communications (MIC), underwent major changes in 2019: the industries surveyed were extended, the sample size was increased tenfold (from approximately 3.7 thousand to 37 thousand), and questionnaires are now collected by mail instead of by enumerators. A hot deck imputation process was added for accounting items such as sales, total expenses, purchases, operating expenses, and inventories, together with a cleaning process for the hot deck donor candidates. Imputation had not been necessary before, since the response rate of the previous survey was almost 100%; however, mail surveys typically yield more non-responses.
Wada et al. (2020) evaluate outlier detection methods that use a mean vector and covariance matrix and assume symmetric data distributions. The blocked adaptive computationally efficient outlier nominators (BACON) by Billor et al. (2000), improved MSD by Béguin and Hulliger (2003), Fast-MCD by Rousseeuw and Driessen (1999), and NNVE by Wang and Raftery (2002) are compared using skewed and long-tailed random datasets with asymmetrical contamination, and improved MSD is selected. They also examine an appropriate data transformation for the highly skewed target variables, based on the number of outliers detected and on scatter plot matrices. Their target variables are highly correlated accounting items that cannot take values less than zero, and the expected outliers mostly have large values. A suitable data transformation in this case can therefore be slightly looser than one that makes the data strictly symmetric: they select the transformation that detects the minimum number of outliers among larger values, and they show that removing outliers from hot deck donor candidates improves the estimation used for imputation. Log transformation, biquadratic root transformation, and square root transformation are compared, and biquadratic root transformation is selected. The lower triangular matrix of Fig. 3 shows an example for the manufacturing industry. Based on these results, outliers in the four highly correlated variables of the Unincorporated Enterprise Survey (sales, total expenses, purchases, and operating expenses) are detected by improved MSD after biquadratic root transformation. Beginning inventory and ending inventory are excluded because a certain number of observations have zero values in some industries and because these variables do not have high covariance with the other variables, although the two are highly correlated with each other.
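The detection pipeline can be sketched for two variables: apply the biquadratic (fourth) root transformation and then flag records with large squared Mahalanobis distances. In this minimal sketch the classical sample mean vector and covariance matrix stand in for the robust improved-MSD estimates, which would be plugged in at the same place in practice; the data are invented.

```python
def fourth_root(rows):
    """Biquadratic (fourth) root transformation for non-negative,
    right-skewed accounting items."""
    return [[v ** 0.25 for v in row] for row in rows]

def mahalanobis2(rows):
    """Squared Mahalanobis distances for 2-dimensional data, using the sample
    mean vector and covariance matrix (a stand-in for the robust improved-MSD
    location and scatter estimates)."""
    n = len(rows)
    mx = sum(r[0] for r in rows) / n
    my = sum(r[1] for r in rows) / n
    sxx = sum((r[0] - mx) ** 2 for r in rows) / (n - 1)
    syy = sum((r[1] - my) ** 2 for r in rows) / (n - 1)
    sxy = sum((r[0] - mx) * (r[1] - my) for r in rows) / (n - 1)
    det = sxx * syy - sxy * sxy
    d2 = []
    for x, y in rows:
        dx, dy = x - mx, y - my
        # inverse of the 2x2 covariance matrix applied to (dx, dy)
        d2.append((syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det)
    return d2

# Invented (sales, expenses) records; the last one is grossly inconsistent
raw = [[100, 95], [200, 190], [400, 380], [800, 760], [1600, 1520], [3200, 40]]
d2 = mahalanobis2(fourth_root(raw))
```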
Those outliers are removed from the hot deck donor candidates in each imputation class, while they are still used in aggregation when producing statistical tables.
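The donor-pool restriction can be illustrated with a minimal hot deck sketch (function name and data are illustrative, not from the survey system): flagged outliers are excluded only as donors and remain in the dataset for aggregation.

```python
import random

def hot_deck_impute(n_missing, donors, outlier_flags, seed=0):
    """Draw a random donor value from the same imputation class for each
    missing record, after removing flagged outliers from the donor pool.
    (Flagged records are excluded only as donors; they stay in the data
    for aggregation.)"""
    pool = [d for d, bad in zip(donors, outlier_flags) if not bad]
    rng = random.Random(seed)
    return [rng.choice(pool) for _ in range(n_missing)]

donors = [10.0, 11.0, 12.0, 1000.0]   # 1000.0 was flagged as an outlier
flags = [False, False, False, True]
imputed = hot_deck_impute(5, donors, flags)
```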

Application of the robust estimator of the generalized ratio model
The robust estimator of the generalized ratio model (9) is adopted for the imputation of major corporate accounting items in the 2016 Economic Census for Business Activity in Japan (Wada and Sakashita 2017; Wada et al. 2021), conducted jointly by the Ministry of Internal Affairs and Communications (MIC) and the Ministry of Economy, Trade and Industry (METI). The items to be imputed are sales explained by expenses, expenses by sales, and salaries by expenses. The model (9) with γ = 1/2 and with γ = 1 are compared, and γ = 1/2 is adopted for all of the imputed items.
The imputation class is determined by CART (classification and regression trees). The target variable is the ratio used for imputation, and the candidate explanatory variables are the 3.5-digit industrial classification code, legal organization, number of employees, type of cooperative, number of regular domestic employees, number of domestic branch offices, and number of branch offices. These are the variables available from the Statistical Business Register before the 2016 Census, since the imputation class has to be determined before questionnaires are collected. The Statistical Business Register is a database of business establishments and enterprises across the country, built from the previous Census and surveys as well as various administrative information. Among these candidate explanatory variables, type of cooperative and number of regular domestic employees are adopted. The minimum number of complete observations for parameter estimation in each imputation class is 30. If an imputation class does not have enough observations, it is merged with another class within the same 1-digit industrial classification code. If there are multiple candidates, the merged class is determined by the Mann-Whitney U test.
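The merging rule can be sketched as follows, under an assumed reading of the text: when several candidate classes are available, choose the one whose ratio distribution is most similar to the undersized class, measured by how close the Mann-Whitney U statistic is to its null expectation n1·n2/2. The function names and the similarity criterion are illustrative, not the Census system's actual implementation.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a against sample b
    (ties counted as 1/2; no continuity correction)."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

def choose_merge_partner(small, candidates):
    """Pick the candidate class most similar to `small`: the one whose U
    statistic is closest to its null expectation n1*n2/2 (a simplified
    reading of the merging rule described in the text)."""
    def dissimilarity(c):
        return abs(mann_whitney_u(small, c) - len(small) * len(c) / 2.0)
    return min(range(len(candidates)), key=lambda i: dissimilarity(candidates[i]))

small_class = [1.00, 1.10, 0.90, 1.05]        # ratios in an undersized class
candidates = [
    [1.00, 0.95, 1.10, 1.02, 0.98],           # similar distribution
    [5.00, 5.20, 4.90, 5.10, 5.05],           # very different distribution
]
partner = choose_merge_partner(small_class, candidates)  # index of the closest class
```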

Concluding remarks
The focus of this paper is controlling the influence of outliers in survey data processing. In addition to conventional univariate methods, some of the multivariate methods introduced here have come into practical use, although examples remain limited for the time being. Other methods are still in the research stage. For example, the simultaneous estimation of β and γ described in Sect. 4.3 is still being improved, and validation of the weight calibration approach in Sect. 5 may require more time, since weighting by item within each observation (rather than by observation) is not yet common practice.
The author hopes that this paper will promote efficiency improvements in official statistics, in terms of both estimation and statistical production.