Reliable Computing, Volume 9, Issue 6, pp 419–440

# Computation of Bounds on Population Parameters When the Data Are Incomplete

• Joel L. Horowitz
• Charles F. Manski
• Maria Ponomareva
• Jörg Stoye

DOI: 10.1023/A:1025865520086


## Abstract

This paper continues our research on the identification and estimation of statistical functionals when the sampling process produces incomplete data due to missing observations or interval measurement of variables. Incomplete data usually cause population parameters of interest in applications to be unidentified except under untestable and often controversial assumptions. However, it is often possible to identify sharp bounds on these parameters. The bounds are functionals of the population distribution of the available data and do not rely on untestable assumptions about the process through which data become incomplete. They contain all logically possible values of the population parameters. Moreover, every parameter value within the bounds is consistent with some model of the process that generates incomplete data. The bounds can be estimated consistently by replacing the population distribution of the data with the empirical distribution in the functionals that give the bounds. In practice, this is straightforward in some circumstances but computationally burdensome in others; in general, the bounds are the solutions to non-convex mathematical programming problems that can be difficult to solve. Horowitz and Manski (Censoring of Outcomes and Regressors Due to Survey Nonresponse: Identification and Estimation Using Weights and Imputations, Journal of Econometrics 84 (1998), pp. 37–58; Nonparametric Analysis of Randomized Experiments with Missing Covariate and Outcome Data, Journal of the American Statistical Association 95 (2000), pp. 77–84) studied nonparametric mean regression with missing data. In this paper, we first describe the general problem. We then present new findings on the computation of bounds on best linear predictors under square loss. We describe a genetic algorithm to compute sharp bounds and a minimax approach yielding simple but non-sharp outer bounds. We use actual data to demonstrate the computations.
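The plug-in principle mentioned in the abstract can be illustrated with the simplest case covered by this framework: worst-case bounds on a population mean when some outcomes are missing. The sketch below is not the paper's algorithm for best linear predictors; it shows only the plug-in idea, under the assumption that the outcome is known a priori to lie in a bounded interval (here [0, 1]), so that missing values can be replaced by the logical extremes of the support.

```python
import numpy as np

def worst_case_mean_bounds(y, observed, y_min=0.0, y_max=1.0):
    """Plug-in worst-case bounds on E[Y] when some outcomes are missing
    and Y is known to lie in [y_min, y_max].

    y        : array of outcomes (entries at missing positions are ignored)
    observed : boolean array, True where y is observed
    """
    p_obs = observed.mean()        # empirical P(observed)
    mean_obs = y[observed].mean()  # empirical E[Y | observed]
    # Lower bound: every missing outcome equals y_min;
    # upper bound: every missing outcome equals y_max.
    lower = mean_obs * p_obs + y_min * (1 - p_obs)
    upper = mean_obs * p_obs + y_max * (1 - p_obs)
    return lower, upper

# Hypothetical sample: 8 observed outcomes in [0, 1], 2 missing.
y = np.array([0.2, 0.4, 0.6, 0.8, 0.5, 0.3, 0.7, 0.9, np.nan, np.nan])
observed = ~np.isnan(y)
lo, hi = worst_case_mean_bounds(y, observed)  # width = (1 - 0) * P(missing)
```

The width of the interval equals the support length times the empirical missing-data rate, so the bounds are informative whenever nonresponse is not too frequent; no assumption about why the data are missing is used.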