
Asymptotic Properties of M, ML, and Maximum A Posteriori Estimators

Chapter in: Design of Experiments in Nonlinear Models

Part of the book series: Lecture Notes in Statistics (LNS, volume 212)

Abstract

We consider the regression model (3.2) where the errors \(\varepsilon _{i}\) are independently distributed, \(\varepsilon _{i}\) having the p.d.f. \(\bar{\varphi }_{x_{i}}(\cdot )\).
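The setting can be made concrete with a small simulation. Everything below is an illustrative assumption, not taken from the chapter: a hypothetical response function and Laplace-distributed errors, for which the ML estimator of \(\theta\) minimizes the sum of absolute residuals (least absolute deviations).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical model: y_i = th1 * exp(-th2 * x_i) + eps_i, with i.i.d.
# Laplace errors; the ML estimator then minimizes sum_i |y_i - eta(x_i, th)|.
rng = np.random.default_rng(0)
x = np.linspace(0.1, 2.0, 200)
theta_true = np.array([2.0, 1.5])

def eta(th):
    # illustrative nonlinear response function
    return th[0] * np.exp(-th[1] * x)

y = eta(theta_true) + rng.laplace(scale=0.1, size=x.size)

def negloglik(th):
    # Laplace negative log-likelihood, up to constants not depending on th
    return np.abs(y - eta(th)).sum()

theta_hat = minimize(negloglik, x0=[1.0, 1.0], method="Nelder-Mead").x
```

With a Gaussian error density the same construction reduces to ordinary nonlinear least squares; only the per-observation log-density changes.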


Notes

  1.

    A more standard condition, used in more general situations than regression models, is that the support of the density of the observations should not depend on the value \(\theta\) of the parameters in the model generating these observations.

  2.

    See Sect. 4.6 for a brief discussion on the application of maximum likelihood estimation to dynamical systems, for which the independence assumption does not hold.

  3.

    \(\mu _{x}\) and \(\varphi _{x,\theta }(\cdot )\) only need to satisfy the condition (4.34).

  4.

    The notion of adaptive estimation originated in [Stein, 1956]; see Beran [1974], Stone [1975], and Bickel et al. [1993] for the main steps in its development.

  5.

    That is, \(\sqrt{N}(\hat{\theta }_{1}^{N} -\bar{\theta })\) is bounded in probability; see page 33.

  6.

    Typically, this implies that unknown initial values have been replaced by zero in the dynamical systems; the ML method is then called conditional ML, where conditional refers to this choice of initial values.

  7.

    Many methods exist; they receive different names (recursive ML, recursive pseudo-linear regression, recursive generalized LS, extended LS…) depending on the type of model to which they are applied and on the type of approximations used in the implementation of the Newton step.
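The recursive idea behind these methods can be sketched in its simplest form, recursive least squares, under the simplifying assumption of a model linear in \(\theta\) (so that the Newton step is exact). All names and values below are illustrative.

```python
import numpy as np

# Recursive least squares: process observations one at a time, updating the
# estimate with a gain computed from a running inverse-information matrix P.
rng = np.random.default_rng(1)
theta_true = np.array([0.8, -0.3])
theta = np.zeros(2)            # running parameter estimate
P = 1e3 * np.eye(2)            # large initial P encodes a diffuse prior

for _ in range(500):
    phi = rng.normal(size=2)                  # regressor vector for this sample
    y = phi @ theta_true + 0.05 * rng.normal()
    k = P @ phi / (1.0 + phi @ P @ phi)       # gain (one exact Newton step)
    theta = theta + k * (y - phi @ theta)     # prediction-error correction
    P = P - np.outer(k, phi @ P)              # rank-one update of P
```

The variants named in note 7 differ in how \(\varphi\) and the prediction error are constructed for nonlinear or dynamical models, and in how the exact update of P is approximated.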

References

  • Barndorff-Nielsen, O. (1978). Information and Exponential Families in Statistical Theory. Chichester: Wiley.

  • Beran, R. (1974). Asymptotically efficient rank estimates in location models. Ann. Statist. 2, 63–74.

  • Bickel, P. (1982). On adaptive estimation. Ann. Statist. 10, 647–671.

  • Bickel, P., C. Klaassen, Y. Ritov, and J. Wellner (1993). Efficient and Adaptive Estimation for Semiparametric Models. Baltimore: Johns Hopkins Univ. Press.

  • Bierens, H. (1994). Topics in Advanced Econometrics. Cambridge: Cambridge Univ. Press.

  • Caines, P. (1988). Linear Stochastic Systems. New York: Wiley.

  • Cox, D. and D. Hinkley (1974). Theoretical Statistics. London: Chapman & Hall.

  • del Pino, G. (1989). The unifying role of iterative generalized least squares in statistical algorithms (with discussion). Statist. Sci. 4(4), 394–408.

  • Downing, D., V. Fedorov, and S. Leonov (2001). Extracting information from the variance function: optimal design. In A. Atkinson, P. Hackl, and W. Müller (Eds.), mODa’6 – Advances in Model–Oriented Design and Analysis, Proc. 6th Int. Workshop, Puchberg/Schneberg (Austria), pp. 45–52. Heidelberg: Physica Verlag.

  • Goodwin, G. and R. Payne (1977). Dynamic System Identification: Experiment Design and Data Analysis. New York: Academic Press.

  • Gorman, J. and A. Hero (1990). Lower bounds for parametric estimation with constraints. IEEE Trans. Information Theory 36, 1285–1301.

  • Green, P. (1984). Iteratively reweighted least squares for maximum likelihood estimation, and some robust and resistant alternatives (with discussion). J. Roy. Statist. Soc. B-46(2), 149–192.

  • Heyde, C. (1997). Quasi-likelihood and its Application. A General Approach to Optimal Parameter Estimation. New York: Springer.

  • Huber, P. (1981). Robust Statistics. New York: John Wiley.

  • Ibragimov, I. and R. Has’minskii (1981). Statistical Estimation. Asymptotic Theory. Heidelberg: Springer.

  • Jørgensen, B. (1987). Exponential dispersion models. J. Roy. Statist. Soc. B-49(2), 127–162.

  • Le Cam, L. (1953). On some asymptotic properties of maximum likelihood estimates and related Bayes’ estimates. Univ. California Pub. in Stat. 1, 277–330.

  • Le Cam, L. (1960). Locally asymptotically normal families of distributions. Univ. California Pub. in Stat. 3, 37–98.

  • Lehmann, E. and G. Casella (1998). Theory of Point Estimation. Heidelberg: Springer.

  • Liang, K.-Y. and S. Zeger (1995). Inference based on estimating functions in the presence of nuisance parameters. Statist. Sci. 10(2), 158–173.

  • Ljung, L. (1987). System Identification, Theory for the User. Englewood Cliffs: Prentice Hall.

  • Manski, C. (1984). Adaptive estimation of nonlinear regression models. Econometric Rev. 3(2), 145–194.

  • McCullagh, P. and J. Nelder (1989). Generalized Linear Models. London: Chapman & Hall. [2nd ed.].

  • Pázman, A. (2002b). Results on nonlinear least squares estimators under nonlinear equality constraints. J. Stat. Plann. Inference 103, 401–420.

  • Söderström, T. and P. Stoica (1981). Comparison of some instrumental variable methods—consistency and accuracy aspects. Automatica 17(1), 101–115.

  • Söderström, T. and P. Stoica (1983). Instrumental Variable Methods for System Identification. New York: Springer.

  • Söderström, T. and P. Stoica (1989). System Identification. New York: Prentice Hall.

  • Stein, C. (1956). Efficient nonparametric testing and estimation. In Proc. 3rd Berkeley Symp. Math. Stat. Prob., Volume 1, pp. 187–196. Berkeley: Univ. of California Press.

  • Stoica, P. (1998). On the Cramér-Rao bound under parametric constraints. IEEE Signal Proc. Lett. 5(7), 177–179.

  • Stoica, P. (2001). Parameter estimation problems with singular information matrices. IEEE Trans. Signal Proc. 49, 87–90.

  • Stone, C. (1975). Adaptive maximum likelihood estimators of a location parameter. Ann. Statist. 3(2), 267–284.

  • van de Geer, S. (2000). Empirical Processes in M-estimation. Cambridge: Cambridge Univ. Press.

  • van der Vaart, A. (1998). Asymptotic Statistics. Cambridge: Cambridge Univ. Press.

  • van der Vaart, A. (2002). The statistical work of Lucien Le Cam. Ann. Statist. 30(3), 631–682.

  • Walter, E. and L. Pronzato (1997). Identification of Parametric Models from Experimental Data. Heidelberg: Springer.

  • Zarrop, M. (1979). Optimal Experiment Design for Dynamic System Identification. Heidelberg: Springer.


Copyright information

© 2013 Springer Science+Business Media New York

Cite this chapter

Pronzato, L., Pázman, A. (2013). Asymptotic Properties of M, ML, and Maximum A Posteriori Estimators. In: Design of Experiments in Nonlinear Models. Lecture Notes in Statistics, vol 212. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-6363-4_4
