Uncertainty Bounds in Parameter Estimation with Limited Data

  • James C. Spall
Part of the International Series in Operations Research & Management Science book series (ISOR, volume 46)


Consider the problem of determining uncertainty bounds for parameter estimates when only a small sample of data is available. Calculating uncertainty bounds requires information about the distribution of the estimate. Although many common parameter estimation methods (e.g., maximum likelihood, least squares, maximum a posteriori) yield estimates with an asymptotic normal distribution, very little is usually known about the finite-sample distribution. This paper presents a method for characterizing the distribution of an estimate when the sample size is small. The approach works by comparing the actual (unknown) distribution of the estimate with an “idealized” (known) distribution. Some discussion and analysis are included that compare the approach here with the well-known bootstrap and saddlepoint methods. Example applications of the approach are presented in the areas of signal-plus-noise modeling, nonlinear regression, and time series correlation analysis. The signal-plus-noise problem is treated in greatest detail; this problem arises in many contexts, including state-space modeling, the problem of combining several independent estimates, and quantile calculation for projectile accuracy analysis.
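To make the motivation concrete, the following is a minimal, hypothetical sketch (not the chapter's method) of the gap the abstract describes: for a small sample from a signal-plus-noise model, the asymptotic normal confidence interval and a bootstrap percentile interval (one of the comparison methods mentioned above) need not agree. All names, the sample size n = 8, and the simulated true parameter value are illustrative assumptions.

```python
import random
import statistics

random.seed(0)

# Hypothetical tiny sample from a signal-plus-noise model y_i = theta + noise_i.
# theta = 2.0 is used only to simulate data; in practice it is the unknown parameter.
n = 8
y = [2.0 + random.gauss(0, 1.5) for _ in range(n)]

theta_hat = statistics.mean(y)            # least-squares / ML-type estimate of theta
se_hat = statistics.stdev(y) / n ** 0.5   # estimated standard error

# (a) Asymptotic normal 90% interval -- strictly justified only as n grows.
z = 1.645
ci_normal = (theta_hat - z * se_hat, theta_hat + z * se_hat)

# (b) Bootstrap percentile 90% interval -- a standard finite-sample comparison.
B = 4000
boot = sorted(statistics.mean(random.choices(y, k=n)) for _ in range(B))
ci_boot = (boot[int(0.05 * B)], boot[int(0.95 * B) - 1])

print("asymptotic normal 90%% CI: (%.3f, %.3f)" % ci_normal)
print("bootstrap percentile 90%% CI: (%.3f, %.3f)" % ci_boot)
```

With n this small, the two intervals typically differ in both width and center offset, which is exactly the regime where a finite-sample characterization of the estimator's distribution matters.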

Key Words

small-sample parameter estimation; system identification; uncertainty regions; M-estimates; signal-plus-noise; nonlinear regression; time series correlation





Copyright information

© Springer Science + Business Media, Inc. 2002

Authors and Affiliations

  • James C. Spall
  1. The Johns Hopkins University, Applied Physics Laboratory, Laurel, MD
