Introduction

Missing data are often encountered for various reasons in biomedical research and present challenges for data analysis. It is well known that inadequate handling of missing data may lead to biased estimation and inference. A number of statistical methods have been developed for handling missing data. Largely due to its ease of use, multiple imputation (MI)1,2 has arguably been the most popular method for handling missing data in practice. The basic idea underlying MI is to replace each missing data point with a set of values generated from its predictive distribution given the observed data, and to generate multiply imputed data sets to account for the uncertainty of imputation. Each imputed data set is then analyzed separately using standard complete-data analysis methods, and the results are combined across all imputed data sets using Rubin's rules1,2. MI can be readily conducted using available software packages3,4,5 in a wide range of situations and has been investigated extensively in many settings6,7,8,9,10,11,12. Most existing MI methods rely on the assumption of missingness at random (MAR)2, i.e., that missingness depends only on observed data; our current work also focuses on MAR. In recent years, the amount of data has increased considerably in many applications, such as omics data and electronic health record data. In particular, the high dimension of omics data may cause serious problems for MI in terms of applicability and accuracy. In what follows, we first describe some challenges for MI in the presence of high-dimensional data and explain why regularized regressions are suitable in this setting, and then review existing MI methods for general missing data patterns and propose their extensions for high-dimensional data.
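For reference, Rubin's rules pool a scalar quantity of interest $Q$ across $M$ imputed data sets using the standard formulas

$$\bar{Q} = \frac{1}{M}\sum_{m=1}^{M}\hat{Q}_m, \qquad \bar{U} = \frac{1}{M}\sum_{m=1}^{M}U_m, \qquad B = \frac{1}{M-1}\sum_{m=1}^{M}\big(\hat{Q}_m - \bar{Q}\big)^2,$$

where $\hat{Q}_m$ and $U_m$ are the point estimate and its variance estimate from the $m$-th imputed data set, and the total variance of $\bar{Q}$ is $T = \bar{U} + (1 + 1/M)B$.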

Advances in technologies have led to the collection of high-dimensional data, such as omics data, in many biomedical studies where the number of variables is very large and missing data are often present. Such high-dimensional data present unique challenges to MI. When conducting MI, Meng13 suggested that imputation models be as general as the data allow, in order to accommodate a wide range of statistical analyses that may be conducted using the multiply imputed data sets. However, in the presence of high-dimensional data, it is often infeasible to include all variables in an imputation model. As such, machine learning and model trimming techniques have been used to build imputation models in these settings. Stekhoven et al.14 proposed a random forest-based algorithm for missing data imputation called missForest. Random forest utilizes bootstrap aggregation of multiple regression trees to reduce the risk of overfitting, and combines the predictions from the trees to improve the accuracy of predictions15. Shah et al.16 suggested a variant of missForest and compared it to parametric imputation methods; they showed that their random forest imputation method was more efficient and produced narrower confidence intervals than standard MI methods. Liao et al.17 developed four variations of K-nearest-neighbor (KNN) imputation methods. However, these methods are improper in the sense of Rubin (1987)1, since they do not adequately account for the uncertainty of estimating the parameters in the imputation models. Improper imputation may lead to biased parameter estimates and invalid inference in subsequent analyses. In addition, KNN methods are known to suffer from the curse of dimensionality18,19 and hence may not be suitable for high-dimensional data. Apart from random forest and KNN, regularized regression, which allows for simultaneous parameter estimation and variable selection, presents another option for building imputation models in the presence of high-dimensional data. The basic idea of regularized regression is to minimize the loss function of a regression subject to a penalty; different penalty specifications give rise to different regularized regression methods. Zhao and Long (2013)20 investigated the use of regularized regression for MI, including lasso21, elastic net22 (EN), and adaptive lasso23 (Alasso); they also developed MI using a Bayesian lasso approach. However, they focused on the setting where only one variable has missing values. There has been limited work on MI methods for general missing data patterns, where multiple variables have missing values, in the presence of high-dimensional data.
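To fix ideas, in the linear case these three regularized estimators solve the standard penalized least-squares problem

$$\hat{\beta} = \arg\min_{\beta}\, \frac{1}{2n}\sum_{i=1}^{n}\big(y_i - x_i^{\top}\beta\big)^2 + P_{\lambda}(\beta),$$

with $P_{\lambda}(\beta) = \lambda\sum_k|\beta_k|$ for lasso, $P_{\lambda}(\beta) = \lambda\big\{\alpha\sum_k|\beta_k| + \tfrac{1-\alpha}{2}\sum_k\beta_k^2\big\}$ for EN, and $P_{\lambda}(\beta) = \lambda\sum_k w_k|\beta_k|$ for Alasso, where the weights $w_k = 1/|\tilde{\beta}_k|^{\gamma}$ are built from an initial consistent estimate $\tilde{\beta}$, and $\lambda$, $\alpha$, and $\gamma$ are tuning parameters.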

To handle general missing data patterns, there are two MI approaches, one based on joint modeling (JM)24 and the other based on fully conditional specifications; the latter is also known as multiple imputation by chained equations (MICE) and has been implemented independently by van Buuren et al. (2011)3 and Raghunathan et al. (1996)25. While JM has strong theoretical justifications and works reasonably well for low-dimensional data, its performance deteriorates as the data dimension increases26, and it is difficult to extend to high-dimensional data. MICE involves specifying a set of univariate imputation models. Since each imputation model is specified for one partially observed variable conditional on the other variables, it simplifies the modeling process. While MICE lacks theoretical justification except in some special cases27,28, it has been shown to achieve satisfactory performance in extensive numerical studies and empirical examples. White et al. (2011)29 provide a review of and practical guidance for MICE. It is worth mentioning that standard MICE methods cannot handle high-dimensional data. For example, the MICE algorithms proposed by van Buuren et al.3 and Su et al.5 cannot handle the prostate cancer data used in our data analysis or the high-dimensional data generated in our simulations. As such, we focus on extending MICE to high-dimensional settings for handling general missing data patterns.

Methodology

Suppose that our data set $Z$ has $p$ variables, $z_1, \ldots, z_p$. Without loss of generality, we assume that the first $l$ ($l \le p$) variables contain missing values. Suppose the data consist of $n$ observations and we have $r_j$ observed values for variable $z_j$. We denote the observed and missing components of variable $z_j$ by $z_{j,obs}$ and $z_{j,mis}$, respectively. Let $Z_{-j}$ be the collection of the $p - 1$ variables in $Z$ except $z_j$. Let $Z_{-j,obs}$ and $Z_{-j,mis}$ denote the two components of $Z_{-j}$ corresponding to $z_{j,obs}$ and $z_{j,mis}$, respectively.

Multiple imputation by chained equations

Let the hypothetically complete data $Z$ be a partially observed random draw from a multivariate distribution $P(Z \mid \theta)$. We assume that the multivariate distribution of $Z$ is completely specified by the unknown parameter vector $\theta$. The standard MICE algorithm obtains a posterior distribution of $\theta$ by sampling iteratively from conditional distributions of the form $P(z_j \mid Z_{-j}, \theta_j)$, $j = 1, \ldots, l$. Note that the parameters $\theta_1, \ldots, \theta_l$ are specific to the conditional densities and might not determine a unique 'true' joint distribution $P(Z \mid \theta)$.

To be specific, MICE starts with a simple imputation, such as imputing the mean, for every missing value in the data set. The initial values are denoted by $z_1^{(0)}, \ldots, z_l^{(0)}$. Then, given the imputed values at iteration $t$, $z_1^{(t)}, \ldots, z_l^{(t)}$, for variable $z_j$ ($j = 1, \ldots, l$) new parameter estimates $\hat{\theta}_j^{(t+1)}$ for the next iteration are obtained by fitting the imputation model

$$P\big(z_{j,obs} \mid Z_{-j,obs}^{(t)}, \theta_j\big) \qquad (1)$$

through a regression of $z_{j,obs}$ on $Z_{-j,obs}^{(t)}$. The missing values of $z_j$ are then replaced with predicted values $z_{j,mis}^{(t+1)}$ drawn from the predictive distribution

$$P\big(z_{j,mis} \mid Z_{-j,mis}^{(t)}, \hat{\theta}_j^{(t+1)}\big). \qquad (2)$$

Note that when $z_j$ is subsequently used as a predictor in the regression models for the other variables that have missing values, both its observed and predicted values are used. Each iteration entails cycling through these steps for every variable with missing values, that is, $z_1$ to $z_l$. At the end of each iteration, all missing values have been replaced by predictions from regression models that reflect the relationships observed in the data. The procedure is repeated iteratively until convergence. The complete algorithm can be described as follows:
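The following is a schematic R sketch of this loop for continuous variables (our own minimal illustration, not the paper's exact algorithm display; a proper implementation would also draw $\theta_j$ from its posterior rather than relying on the fitted model alone):

```r
## Schematic MICE loop for continuous variables: mean initialization,
## then cycling regression-based imputation over z_1, ..., z_l.
## Works only when the number of predictors stays below r_j (see below).
mice_sketch <- function(Z, n_iter = 20) {
  miss_idx <- lapply(Z, function(v) which(is.na(v)))
  has_miss <- names(Z)[sapply(miss_idx, length) > 0]
  for (j in has_miss)                        # step 0: simple mean imputation
    Z[[j]][miss_idx[[j]]] <- mean(Z[[j]], na.rm = TRUE)
  for (t in seq_len(n_iter)) {               # iterations t = 1, ..., T
    for (j in has_miss) {                    # cycle through z_1, ..., z_l
      obs <- setdiff(seq_len(nrow(Z)), miss_idx[[j]])
      fit <- lm(reformulate(setdiff(names(Z), j), response = j), data = Z[obs, ])
      mu  <- predict(fit, newdata = Z[miss_idx[[j]], , drop = FALSE])
      ## draw from the predictive distribution of model (2) given the fit;
      ## also drawing theta_j from its posterior would make this proper
      Z[[j]][miss_idx[[j]]] <- mu + rnorm(length(mu), 0, summary(fit)$sigma)
    }
  }
  Z
}
```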

Note that while the observed data $Z_{obs}$ do not change in the iterative updating procedure, the imputed missing data $Z_{mis}$ do change from one iteration to another. After convergence, the last imputed data sets, after appropriate thinning, are retained for subsequent standard complete-data analysis.

In the case of high-dimensional data, where $p > r_j$ or $p \approx r_j$, it is not feasible to fit the imputation model (1) using traditional regression. In the following two subsections, we describe two approaches that apply regularized regression techniques to general missing data patterns in the presence of high-dimensional data.
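Before turning to those two approaches, a quick check in R makes the rank deficiency concrete: with more predictors than cases, ordinary least squares cannot estimate all coefficients.

```r
## With r_j = 50 cases and p = 100 predictors, OLS is rank-deficient and
## lm() returns NA for the coefficients it cannot estimate.
set.seed(1)
X <- matrix(rnorm(50 * 100), nrow = 50, ncol = 100)
y <- rnorm(50)
sum(is.na(coef(lm(y ~ X))))   # 51 of the 101 coefficients are NA
```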

Direct use of regularized regression for multiple imputation

For variable $z_j$, our goal is to fit the imputation model (1) using the $r_j$ cases with observed $z_j$. Assuming that $q_j$ variables in $Z_{-j,obs}$ are associated with $z_{j,obs}$, we denote this set of variables by $S_j$, which is also known as the true active set. We define the subset of predictors that are selected to impute $z_j$ as the active set, denoted by $\hat{S}_j$, with corresponding design matrix $Z_{\hat{S}_j}$. We first consider an approach in which a regularization method is used to conduct both model trimming and parameter estimation, and a bootstrap step is incorporated to simulate random draws from $P(\theta_j \mid z_{j,obs}, Z_{-j,obs})$. This approach is referred to as MICE through the direct use of regularized regression (MICE-DURR). The purpose of the bootstrap is to accommodate sampling variation in estimating the population regression parameters, which is part of ensuring that imputations are proper16. In the $t$-th iteration and for variable $z_j$ ($j = 1, \ldots, l$), define $Z_{-j}^{(t)} = \{z_1^{(t)}, \ldots, z_{j-1}^{(t)}, z_{j+1}^{(t-1)}, \ldots, z_l^{(t-1)}, z_{l+1}, \ldots, z_p\}$. Denote by $Z_{-j,obs}^{(t)}$ the component of $Z_{-j}^{(t)}$ corresponding to $z_{j,obs}$. The algorithm can be described as follows:

1. Generate a bootstrap data set $Z^{*(t)}$ of size $n$ by randomly drawing $n$ observations from $Z^{(t)}$ with replacement. Denote the observed values of $z_j^{*}$ by $z_{j,obs}^{*}$ and the corresponding component of $Z_{-j}^{*(t)}$ by $Z_{-j,obs}^{*(t)}$.

2. Regarding $z_{j,obs}^{*}$ as the outcome and $Z_{-j,obs}^{*(t)}$ as the predictors, use a regularized regression method to fit model (1) and obtain $\hat{\theta}_j^{*(t)}$. Note that $\hat{\theta}_j^{*(t)}$ is considered a random draw from $P(\theta_j \mid z_{j,obs}, Z_{-j,obs}^{(t)})$.

3. Predict $z_{j,mis}$ with $z_{j,mis}^{(t)}$ by drawing randomly from the predictive distribution $P(z_{j,mis} \mid Z_{-j,mis}^{(t)}, \hat{\theta}_j^{*(t)})$ in model (2), noting that imputation is conducted on the original data set $Z^{(t)}$, not the bootstrap data set $Z^{*(t)}$.

We conduct the above procedure for all variables that have missing values within one iteration and repeat iteratively to obtain the imputed data sets. Subsequently, standard complete-data analysis can be applied to each of the imputed data sets.

To make our approach clear, we link the above three steps to the MICE algorithm. In the first step, we bootstrap the data from the last iteration to help ensure that the subsequent imputations are proper. In the second step, we use regularized regression to fit model (1) and obtain an estimate of $\theta_j$. We then use this estimate to predict the missing values from model (2). Details of MICE-DURR for three types of data can be found in Supplementary Method S1 online.
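As a minimal illustration, one MICE-DURR update for a continuous $z_j$ might look as follows in R, using lasso via the glmnet package (our sketch under simplifying assumptions, not the authors' implementation; for brevity we bootstrap the observed cases directly rather than the full data set):

```r
## One MICE-DURR update for a continuous z_j:
## bootstrap, penalized fit (trimming + estimation), predictive draw.
library(glmnet)

durr_update <- function(zj, Zmj) {   # zj: n-vector with NAs; Zmj: n x (p-1) matrix
  mis  <- which(is.na(zj))
  obs  <- setdiff(seq_along(zj), mis)
  boot <- sample(obs, length(obs), replace = TRUE)        # bootstrap step
  cvfit <- cv.glmnet(Zmj[boot, ], zj[boot], alpha = 1)    # lasso fit on bootstrap
  res   <- zj[boot] - predict(cvfit, Zmj[boot, ], s = "lambda.min")
  sigma <- sd(as.vector(res))                             # residual SD for the draw
  mu <- predict(cvfit, Zmj[mis, , drop = FALSE], s = "lambda.min")
  zj[mis] <- as.vector(mu) + rnorm(length(mis), 0, sigma) # impute the original data
  zj
}
```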

Indirect use of regularized regression for multiple imputation

MICE-DURR uses regularized regression for both model trimming and parameter estimation. An alternative is to use a regularization method for model trimming only, followed by a standard multiple imputation procedure based on the estimated active set $\hat{S}_j$, say, through a maximum likelihood inference procedure. We refer to this approach as MICE through the indirect use of regularized regression (MICE-IURR). Suppose $Z_{-j}^{(t)}$ is defined as above, and denote by $Z_{-j,obs}^{(t)}$ the component of $Z_{-j}^{(t)}$ corresponding to $z_{j,obs}$. At the $t$-th iteration and for variable $z_j$, the MICE-IURR algorithm proceeds as follows:

1. Use a regularized regression method to fit a multiple linear regression model regarding $z_{j,obs}$ as the outcome and $Z_{-j,obs}^{(t)}$ as the predictors, and identify the active set $\hat{S}_j$. Let $Z_{\hat{S}_j}^{(t)}$ denote the subset of $Z_{-j}^{(t)}$ that contains only the variables in the active set. Correspondingly, denote the two components of $Z_{\hat{S}_j}^{(t)}$ by $Z_{\hat{S}_j,obs}^{(t)}$ and $Z_{\hat{S}_j,mis}^{(t)}$.

2. Approximate the distribution of $\theta_j$ given $z_{j,obs}$ and $Z_{\hat{S}_j,obs}^{(t)}$ using a standard inference procedure such as maximum likelihood.

3. Predict $z_{j,mis}$: randomly draw $\tilde{\theta}_j^{(t)}$ from the distribution obtained in step 2, and subsequently predict $z_{j,mis}$ with $z_{j,mis}^{(t)}$ by drawing randomly from the predictive distribution $P(z_{j,mis} \mid Z_{\hat{S}_j,mis}^{(t)}, \tilde{\theta}_j^{(t)})$.

These three steps are conducted iteratively until convergence, and the last imputed data sets are retained for the subsequent analyses. In the third step, instead of fixing one estimate of $\theta_j$ for all iterations, we randomly draw $\tilde{\theta}_j^{(t)}$ from its approximate distribution and use it to predict $z_{j,mis}$ at each iteration. This strategy helps ensure that our imputations are proper30. Details of MICE-IURR for three types of data can be found in Supplementary Method S2 online.
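In the same spirit, here is a minimal R sketch of one MICE-IURR update for a continuous $z_j$ (our illustration, assuming a Gaussian working model: the lasso is used only to select the active set, and least squares on the selected predictors supplies the approximate distribution of $\theta_j$):

```r
## One MICE-IURR update: lasso trims the model; OLS/ML on the active set
## gives the distribution from which theta_j is drawn before prediction.
library(glmnet)
library(MASS)   # mvrnorm

iurr_update <- function(zj, Zmj) {
  mis <- which(is.na(zj))
  obs <- setdiff(seq_along(zj), mis)
  cvfit  <- cv.glmnet(Zmj[obs, ], zj[obs], alpha = 1)
  active <- which(as.vector(coef(cvfit, s = "lambda.min"))[-1] != 0)
  X <- cbind(1, Zmj[obs, active, drop = FALSE])             # selected design matrix
  fit <- lm.fit(X, zj[obs])
  sigma2 <- sum(fit$residuals^2) / fit$df.residual
  V <- sigma2 * solve(crossprod(X))                         # approx. covariance
  theta <- MASS::mvrnorm(1, fit$coefficients, V)            # random draw of theta_j
  Xmis <- cbind(1, Zmj[mis, active, drop = FALSE])
  zj[mis] <- as.vector(Xmis %*% theta) + rnorm(length(mis), 0, sqrt(sigma2))
  zj
}
```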

Simulation Studies

Extensive simulations are conducted to evaluate the performance of the two proposed methods MICE-DURR and MICE-IURR in comparison with the standard MICE and several other existing methods under general missing data patterns. For MICE-DURR and MICE-IURR, we consider three regularization methods, namely, lasso, EN and Alasso. We summarize the simulation results over 200 Monte Carlo (MC) data sets.

The setup of the simulations is similar to that used in Zhao and Long (2013)20. Specifically, each MC data set has a sample size of $n = 100$ and includes $y$, the fully observed outcome variable, and $z = (z_1, \ldots, z_p)$, the set of predictors and auxiliary variables. We consider settings with $p = 200$ and $p = 1000$. Variables $z_1$, $z_2$, and $z_3$ have missing values, which follow a general missing data pattern. We first generate $(z_4, \ldots, z_p)$ from a multivariate normal distribution with mean zero and a first-order autoregressive covariance matrix with autocorrelation $\rho$ varying over 0, 0.1, 0.5, and 0.9. Given $(z_4, \ldots, z_p)$, the variables $z_1$, $z_2$, and $z_3$ are generated independently from a normal distribution whose mean is a linear combination of the variables in $S$, where $S$ denotes the common true active set, of cardinality $q$, shared by all variables with missing values; we consider settings with $q = 4$ and $q = 20$, with $S$ consisting of components of $(z_4, \ldots, z_p)$. Given $z$, the outcome variable $y$ is generated as $y = \beta_1 z_1 + \beta_2 z_2 + \beta_3 z_3 + \epsilon$, where $\beta_1 = \beta_2 = \beta_3 = 1$ and $\epsilon$ is random noise independent of $z$. Missing values are created in $z_1$, $z_2$, and $z_3$ using logit models for the corresponding missing indicators $R_1$, $R_2$, and $R_3$ that depend only on fully observed quantities, resulting in approximately 40% of observations having missing values.
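A sketch of how one MC data set can be generated under this design is shown below; the logit coefficients in the missingness models, the unit error variances, and the choice of $S$ as the first $q$ components of $(z_4, \ldots, z_p)$ are illustrative placeholders, not the exact specifications of the paper.

```r
## Generate one Monte Carlo data set: n = 100, p = 200, AR(1) predictors,
## z1-z3 driven by the active set S, y driven by z1-z3, MAR missingness.
library(MASS)

set.seed(2016)
n <- 100; p <- 200; rho <- 0.5; q <- 4
Sigma <- rho^abs(outer(1:(p - 3), 1:(p - 3), "-"))      # AR(1) covariance
Zrest <- mvrnorm(n, mu = rep(0, p - 3), Sigma = Sigma)  # z4, ..., zp
mu <- rowSums(Zrest[, 1:q, drop = FALSE])               # S: first q columns (placeholder)
z1 <- rnorm(n, mu); z2 <- rnorm(n, mu); z3 <- rnorm(n, mu)
y  <- rnorm(n, z1 + z2 + z3)                            # beta1 = beta2 = beta3 = 1
expit <- function(x) 1 / (1 + exp(-x))
R1 <- rbinom(n, 1, expit(-2 + 0.5 * y))                 # placeholder MAR logit models
R2 <- rbinom(n, 1, expit(-2 + 0.5 * y))
R3 <- rbinom(n, 1, expit(-2 + 0.5 * y))
z1[R1 == 1] <- NA; z2[R2 == 1] <- NA; z3[R3 == 1] <- NA
dat <- data.frame(y, z1, z2, z3, Zrest)
```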

We compare our proposed MICE-DURR and MICE-IURR with the random forest imputation method (MICE-RF)16 and two KNN imputation methods17, one based on the nearest variables (KNN-V) and the other on the nearest subjects (KNN-S). When applying MICE-RF, KNN-V, and KNN-S, the corresponding R packages returned errors when the incomplete data set contained a large number of variables (i.e., $p = 1000$); as a result, these three methods are applied only in the setting with $p = 200$. Since the standard MI method as implemented in the R package mice is not directly applicable in these high-dimensional settings, we consider a standard MI approach that uses the true active set plus $y$ to impute $z_1$, $z_2$, and $z_3$, denoted by MI-true. Of note, MI-true is not applicable in practice, since the true active set is in general unknown.

Following Shah et al.16, 10 imputed data sets are generated using each MI method; a linear regression model is then fitted to regress $y$ on $(z_1, z_2, z_3)$ in each imputed data set, and Rubin's rules are applied to obtain $\hat{\beta}_1$, $\hat{\beta}_2$, $\hat{\beta}_3$ and their SEs. Consistent with recommendations in the literature3,29, we find in our numerical studies that the imputed values from all the MI methods are fairly stable after 10 iterations, and hence we fix the number of iterations at 20. To benchmark bias and loss of efficiency in parameter estimation, two additional approaches that do not involve imputation are also included: a gold standard (GS) method that uses the underlying complete data before missing values are generated, and a complete-case (CC) analysis that uses only the cases for which all variables are observed2. We calculate the following measures to summarize the simulation results for $\hat{\beta}_1$, $\hat{\beta}_2$, and $\hat{\beta}_3$: mean bias, mean standard error (SE), Monte Carlo standard deviation (SD), mean squared error (MSE), and coverage rate of the 95% confidence interval (CR).
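For concreteness, this analysis step can be written as below, where imputed_list is assumed to hold the 10 completed data sets produced by one MI method (a sketch; for mice-package objects, the pool() function performs an equivalent computation):

```r
## Fit the substantive model on each imputed data set and pool the
## coefficients of z1, z2, z3 with Rubin's rules.
pool_rubin <- function(fits, term) {
  est <- sapply(fits, function(f) coef(f)[term])
  se2 <- sapply(fits, function(f) summary(f)$coefficients[term, "Std. Error"]^2)
  M <- length(fits)
  B <- var(est)                              # between-imputation variance
  c(estimate = mean(est), se = sqrt(mean(se2) + (1 + 1/M) * B))
}

fits <- lapply(imputed_list, function(d) lm(y ~ z1 + z2 + z3, data = d))
t(sapply(c("z1", "z2", "z3"), function(b) pool_rubin(fits, b)))
```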

Tables 1, 2 and 3 summarize the results for $\rho = 0.1$, $0.5$, and $0.9$, respectively. Within each table, the different methods are compared, and the effects of the cardinality $q$ of the true active set and the dimension $p$ are evaluated with the correlation $\rho$ fixed. In all scenarios, GS and MI-true, neither of which is applicable to real data, lead to negligible bias, and their CRs are close to the nominal level, whereas the complete-case analysis and the existing MI methods, including MICE-RF, KNN-V and KNN-S, lead to substantial bias. In particular, MICE-RF, despite a large bias, tends to have a coverage rate close to 1. KNN-V and KNN-S, on the other hand, impute the missing values only once and exhibit under-coverage of the 95% CI, likely a result of improper imputation. MICE-DURR performs poorly, with substantial bias, in our settings with general missing data patterns; of note, MICE-DURR was shown in Zhao and Long (2013)20 to improve the accuracy of estimation in simulation settings where only one variable has missing values. In comparison, the MICE-IURR approach achieves better performance, in terms of bias, than the other imputation methods except MI-true. In all settings, the MICE-IURR method using lasso or EN exhibits small to negligible bias, similar to MI-true. When $p = 200$, the biases and MSEs for MICE-RF, KNN-V, and MICE-DURR decrease as $q$ increases, whereas the performance of KNN-S deteriorates; the MICE-IURR methods give fairly stable results as $q$ changes. When $\rho$ and $q$ are fixed, the results of MICE-DURR and MICE-IURR with $p = 1000$ are very similar to those with $p = 200$.

Table 1 Simulation results for estimating β1 = β2 = β3 = 1 in the presence of missing data based on 200 Monte Carlo data sets, where n = 100 and ρ = 0.1.
Table 2 Simulation results for estimating β1 = β2 = β3 = 1 in the presence of missing data based on 200 Monte Carlo data sets, where n = 100 and ρ = 0.5.
Table 3 Simulation results for estimating β1 = β2 = β3 = 1 in the presence of missing data based on 200 Monte Carlo data sets, where n = 100 and ρ = 0.9.

Compared with Table 1, Tables 2 and 3 show similar patterns in terms of the comparisons between the imputation methods. Among the three MICE-IURR algorithms, Alasso tends to underperform lasso and EN when $\rho = 0.1$ (Table 1), but not when $\rho = 0.5$ or $\rho = 0.9$ (Tables 2 and 3). In addition, in some settings the biases and MSEs decrease for MICE-IURR using lasso and EN, and increase for MICE-IURR using Alasso, as $p$ increases from 200 to 1000.

Data Examples

We illustrate the proposed methods using two data examples.

Georgia stroke registry data

Stroke is the fifth leading cause of death in the United States and a major cause of severe long-term disability. The Georgia Coverdell Acute Stroke Registry (GCASR) program is funded by the Centers for Disease Control and Prevention's Paul S. Coverdell National Acute Stroke Registry cooperative agreement to improve the care of acute stroke patients in the pre-hospital and hospital settings. In late 2005, 26 hospitals initially participated in the GCASR program; this number increased to 66 in 2013, covering nearly 80% of acute stroke admissions in Georgia. Intravenous (IV) tissue plasminogen activator (tPA) improves the outcomes of acute ischemic stroke patients, and brain imaging is a critical step in determining the use of IV tPA. Time plays a significant role in determining patients' eligibility for IV tPA and their prognosis. The American Heart Association/American Stroke Association and the CDC set a goal that hospitals should complete imaging within 25 minutes of a patient's arrival. The objective of this study, therefore, is to identify factors that might be associated with hospital arrival-to-imaging time. GCASR collected data on 86,322 clinically diagnosed acute stroke admissions between 2005 and 2013. The registry has 203 data elements, of which 121 (60%) have missing values, attributable to missing answers, services not provided, poor documentation or data abstraction, or ineligibility of a patient for a specific type of care. The extent of missingness varies from 0.01% to 28.72%.

In this analysis, we consider arrival-to-CT time as the outcome and the other 13 variables as predictors. These 13 variables of interest fall into two categories: patient-related variables, such as age, gender, health insurance, and medical history; and pre-hospital-related variables, such as EMS notification. Among the 13 variables, only gender, age, and race are fully observed. A CC analysis is conducted, which retains only 15% of the original subjects after the removal of incomplete cases. In addition, MI methods are used. We first remove variables that have a missing rate greater than 40%, and the remaining variables are used to impute the missing values of the partially observed variables of interest. After imputation, each imputed data set of 86,322 subjects is used to fit the regression model separately, and the results are combined by Rubin's rules. We use a straightforward and popular strategy to handle skip patterns: we first treat skipped items as missing data and impute them along with the genuinely missing values, and then restore the imputed values for skipped items back to skips in the imputed data sets to preserve the skip patterns. We apply five MI methods, namely, the MICE method proposed by van Buuren et al.3 (mice), the MI method proposed by Su et al.5 (mi), the random forest MICE method proposed by Shah et al.16 (MICE-RF), and our MICE-DURR and MICE-IURR methods. When applying KNN-V and KNN-S, the R software returned errors; thus, KNN-V and KNN-S are not included in this data example.
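The skip-pattern strategy can be sketched as follows (the column names ems_arrival and ems_notify are hypothetical placeholders, and the mice() call shows only the standard interface, not the exact settings used in this analysis):

```r
## Impute skips together with genuine missing values, then restore skips.
library(mice)

imp <- mice(gcasr, m = 10, maxit = 20)                 # gcasr: analysis data set
completed <- lapply(seq_len(imp$m), function(k) {
  d <- complete(imp, k)
  d$ems_notify[gcasr$ems_arrival == "no_ems"] <- NA    # restore skip pattern
  d
})
```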

Table 4 provides the results of our data analyses. In the CC analysis, only NIH stroke score and race are associated with arrival-to-CT time. The results from the five MI methods are similar in terms of p-values and the direction of the associations. By comparison, while only 2 variables are statistically significant in the CC analysis, this number increases to 11, 11, 10, 9 and 9 for mi, mice, MICE-RF, MICE-DURR, and MICE-IURR, respectively. For example, after adjusting for the other variables, the mean arrival-to-CT time in patients who arrived during the daytime (Day) was 18.4 minutes shorter than in patients arriving at night, based on MICE-IURR imputation. Health insurance and three medical history variables become statistically significant after we apply the MI methods. However, NIH stroke score and race, which are statistically significant in the CC analysis, are not significant under MICE-DURR and MICE-IURR.

Table 4 Regression coefficients estimates of the Georgia stroke registry data.

Prostate cancer data

The second data set is from a prostate cancer study (GEO GDS3289). It contains 99 samples, including 34 benign epithelium samples and 65 non-benign samples, with 20,000 genomic biomarkers. Missing values are present for 17,893 biomarkers, nearly 89% of all genomic biomarkers in this data set. In this analysis, we consider a binary outcome $y$, defined as $y = 1$ for a benign sample and $y = 0$ otherwise, and test whether selected genomic biomarkers are associated with the outcome. For the purpose of illustration, we choose three biomarkers (FAM178A, IMAGE:813259 and UGP2), whose missing rates are 31.3%, 45.5% and 26.3%, respectively. We conduct a logistic regression of $y$ on the three biomarkers. In this analysis, the mi and mice packages give error messages, and the MICE-RF approach is computationally very expensive; therefore, we use only our two proposed MI methods (MICE-DURR and MICE-IURR) and the KNN-V and KNN-S methods, in addition to the complete-case analysis. All 2,107 biomarkers that do not have missing values are used to impute the missing values in the three biomarkers.

Table 5 presents the results of the logistic regression for the prostate cancer data. All three biomarkers become statistically significant after applying our multiple imputation methods, except that the p-value for UGP2 under MICE-DURR is slightly larger than 0.05. In addition, in most cases the estimates and p-values from MICE-DURR are consistent with those from MICE-IURR. For example, the regression coefficients for the biomarker IMAGE:813259 under the two multiple imputation methods (MICE-DURR and MICE-IURR) are 3.47 and 3.50, with p-values of 0.031 and 0.039, respectively.

Table 5 Regression coefficients estimates of the prostate cancer data.

Discussion

We investigate two approaches to multiple imputation for general missing data patterns in the presence of high-dimensional data. Our numerical results demonstrate that the MICE-IURR approach performs better in terms of bias than the other imputation methods considered, whereas the MICE-DURR approach exhibits large bias and MSE. Of note, while MICE-RF leads to substantial bias in the subsequent analysis of imputed data sets, it tends to yield a smaller MSE than MICE-IURR owing to its smaller SD. When comparing multiple imputation methods, however, it can be argued that when a method leads to substantial bias, and hence incorrect inference, in the subsequent analysis of imputed data sets, whether it yields a smaller MSE is of limited relevance. Two data examples further showcase the limitations of the existing imputation methods considered.

As alluded to earlier, while MICE is a flexible approach for handling different data types, its theoretical properties are not well established. The specified set of conditional regression models may not be compatible with a joint distribution of the variables being imputed. Liu et al. (2013)27 established technical conditions for the convergence of the sequential conditional regression approach when a stationary joint distribution exists, which, however, may not hold in practice. Zhu and Raghunathan (2014)28 assessed theoretical properties of MI for both compatible and incompatible sequences of conditional regression models; however, their results are established for the missing data pattern in which each subject has missing values in at most one variable. One direction for future work is to extend these results to the settings of our interest.

Additional Information

How to cite this article: Deng, Y. et al. Multiple Imputation for General Missing Data Patterns in the Presence of High-dimensional Data. Sci. Rep. 6, 21689; doi: 10.1038/srep21689 (2016).