Some Aspects of Statistical Inference in a Markovian and Mixing Framework

  • George G. Roussas
Part of the International Series in Operations Research & Management Science book series (ISOR, volume 46)

Abstract

This paper is a contribution to a special volume in memory of our departed colleague, Sidney Yakowitz, of the University of Arizona, Tucson.

The material discussed is drawn primarily from the existing literature on Markovian and mixing processes, with emphasis on the statistical inference aspects of such processes. In the Markovian case, both parametric and nonparametric inference is considered. In the parametric component, the classical approach is used, in which the maximum likelihood estimate and the likelihood ratio function are the main tools; methodology based on the concept of contiguity and related results is also employed. In the nonparametric approach, the entities of fundamental importance are unconditional and conditional distribution functions, probability density functions, conditional expectations, and quantiles. Asymptotic optimality properties are stated for the proposed estimates. In the mixing context, three modes of mixing are entertained, but only one, the strong mixing case, is pursued to a considerable extent. Here the approach is exclusively nonparametric. As in the Markovian case, the entities estimated are distribution functions, probability density functions and their derivatives, hazard rates, and regression functions. Basic asymptotic optimality properties of the proposed estimates are stated, and precise references are provided. Estimation is preceded by a discussion of probabilistic results necessary for statistical inference.
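Several of the nonparametric entities mentioned above build on the Rosenblatt-Parzen kernel estimate of a probability density function. As an illustration only, the following is a minimal sketch; the Gaussian kernel, the bandwidth value, and the i.i.d. sample standing in for a stationary series are all assumed choices for the demonstration, not notation from the paper:

```python
import numpy as np

def kde(x, sample, h):
    """Rosenblatt-Parzen kernel density estimate evaluated at the points x,
    using a Gaussian kernel with (assumed) fixed bandwidth h."""
    x = np.asarray(x, dtype=float)[:, None]        # evaluation points, column
    u = (x - np.asarray(sample, dtype=float)) / h  # scaled differences, broadcast
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi) # Gaussian kernel values
    return k.mean(axis=1) / h                      # average kernels, rescale by h

rng = np.random.default_rng(0)
sample = rng.normal(size=1000)        # i.i.d. N(0,1) stand-in for a stationary series
f_hat = kde([0.0], sample, h=0.3)[0]  # estimate of the N(0,1) density at 0 (true value ~0.3989)
```

Under mixing conditions the same estimator is used on dependent observations; the asymptotic results surveyed in the paper concern precisely its consistency and normality in that setting.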

It is hoped that this brief and selected review of literature on statistical inference in Markovian and mixing stochastic processes will serve as an introduction to this area of research for those who entertain such an interest.

The reason for selecting this particular area of research for review is that a substantial part of Sidney’s own contributions has been in this area.

Keywords

Approximate exponential; asymptotic normality; consistency (weak, strong, in quadratic mean, uniform, with rates); contiguity; differentiability in quadratic mean; design (fixed, stochastic); distribution function; estimate (maximum likelihood, maximum probability, nonparametric, of a distribution function, of a parameter, of a probability density function, of a survival function, of derivatives, of hazard rate, parametric, recursive); likelihood ratio test; local asymptotic normality; Markov processes; mixing processes; random number of random variables; stopping times; testing hypotheses

References

  1. Ahmad, I. A. and P. E. Lin. (1984). Fitting a multiple regression function. Journal of Statistical Planning and Inference 9, 163–176.
  2. Akritas, M. G. and G. G. Roussas. (1979). Asymptotic expansion of the log-likelihood function based on stopping times defined on Markov processes. Annals of the Institute of Statistical Mathematics 31A, 21–38.
  3. Akritas, M. G., M. L. Puri, and G. G. Roussas. (1979). Sample size, parameter rates and contiguity: the i.i.d. case. Communications in Statistics: Theory and Methods A8, 71–83.
  4. Akritas, M. G. and G. G. Roussas. (1980). Asymptotic inference in continuous time semi-Markov processes. Scandinavian Journal of Statistics 7, 73–79.
  5. Athreya, K. B. and S. G. Pantula. (1986). Mixing properties of Harris chains and autoregressive processes. Journal of Applied Probability 23, 880–892.
  6. Bahadur, R. R. (1964). On Fisher’s bound for asymptotic variances. Annals of Mathematical Statistics 35, 1545–1552.
  7. Basawa, I. V. and B. L. S. Prakasa Rao. (1980). Statistical Inference for Stochastic Processes. Academic Press, New York.
  8. Bhattacharya, P. K. (1967). Estimation of probability density and its derivatives. Sankhyā Series A 29, 373–382.
  9. Billingsley, P. (1961a). Statistical Inference for Markov Processes. University of Chicago Press, Chicago.
  10. Billingsley, P. (1961b). The Lindeberg-Lévy theorem for martingales. Proceedings of the American Mathematical Society 12, 788–792.
  11. Bosq, D. (1996). Nonparametric Statistics for Stochastic Processes. Springer-Verlag, Berlin.
  12. Bradley, R. C. (1981). Central limit theorems under weak dependence. Journal of Multivariate Analysis 11, 1–16.
  13. Bradley, R. C. (1983). Asymptotic normality of some kernel-type estimators of probability density. Statistics and Probability Letters 1, 295–300.
  14. Bradley, R. C. (1985). Basic properties of strong mixing conditions. In: Dependence in Probability and Statistics, E. Eberlein and M. S. Taqqu (Eds.), 165–192. Birkhäuser, Boston.
  15. Burman, P. (1991). Regression function estimation from dependent observations. Journal of Multivariate Analysis 36, 263–279.
  16. Cai, Z. and G. G. Roussas. (1992). Uniform strong estimation under α-mixing, with rates. Statistics and Probability Letters 15, 47–55.
  17. Castellana, J. V. and M. R. Leadbetter. (1986). On the smoothed probability density estimation for stationary processes. Stochastic Processes and their Applications 21, 179–193.
  18. Cheng, K. F. and P. E. Lin. (1981). Nonparametric estimation of a regression function. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 57, 223–233.
  19. Davydov, Y. A. (1973). Mixing conditions for Markov chains. Theory of Probability and its Applications 18, 312–328.
  20. Denny, J., C. Kisiel, and S. Yakowitz. (1974). Statistical inference on stream flow processes with Markovian characteristics. Water Resources Research 10, 947–954.
  21. Devroye, L. and T. J. Wagner. (1980). On the L1 convergence of kernel estimators of regression function with applications in discrimination. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 51, 15–25.
  22. Devroye, L. and L. Györfi. (1985). Nonparametric Density Estimation: The L1 View. John Wiley and Sons, Toronto.
  23. Devroye, L. (1987). A Course in Density Estimation. Birkhäuser, Boston.
  24. Doob, J. L. (1953). Stochastic Processes. Wiley, New York.
  25. Doukhan, P. (1994). Mixing: Properties and Examples. Lecture Notes in Statistics No. 85, Springer-Verlag, New York.
  26. Eubank, R. (1988). Spline Smoothing and Nonparametric Regression. Marcel Dekker, New York.
  27. Földes, A. (1974). Density estimation for dependent samples. Studia Scientiarum Mathematicarum Hungarica 9, 443–452.
  28. Gani, J., S. Yakowitz, and M. Blount. (1997). The spread and quarantine of HIV infection in a prison system. SIAM Journal of Applied Mathematics 57, 1510–1530.
  29. Gasser, T. and H.-G. Müller. (1979). Kernel estimation of regression functions. In: Smoothing Techniques for Curve Estimation, Lecture Notes in Mathematics 757, 23–68. Springer-Verlag, Berlin.
  30. Georgiev, A. A. (1984a). Kernel estimates of functions and their derivatives with applications. Statistics and Probability Letters 2, 45–50.
  31. Georgiev, A. A. (1984b). Speed of convergence in nonparametric kernel estimation of a regression function and its derivatives. Annals of the Institute of Statistical Mathematics 36, 455–462.
  32. Georgiev, A. A. (1988). Consistent nonparametric multiple regression: The fixed design case. Journal of Multivariate Analysis 25, 100–110.
  33. Gorodetskii, V. V. (1977). On the strong mixing property for linear sequences. Theory of Probability and its Applications 22, 411–413.
  34. Györfi, L. (1981). Strongly consistent density estimate from ergodic samples. Journal of Multivariate Analysis 11, 81–84.
  35. Györfi, L., W. Härdle, P. Sarda, and P. Vieu. (1989). Nonparametric Curve Estimation from Time Series. Springer-Verlag, Berlin.
  36. Györfi, L. and E. Masry. (1990). The L1 and L2 strong consistency of recursive kernel density estimation from dependent samples. IEEE Transactions on Information Theory 36, 531–539.
  37. Györfi, L., G. Morvai, and S. Yakowitz. (1998). Limits to consistent on-line forecasting for ergodic time series. IEEE Transactions on Information Theory 44, 886–892.
  38. Hájek, J. (1970). A characterization of limiting distributions of regular estimates. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 14, 323–330.
  39. Hall, P. and C. C. Heyde. (1980). Martingale Limit Theory and Its Application. Academic Press, New York.
  40. Härdle, W. (1990). Applied Nonparametric Regression. Cambridge University Press, Cambridge.
  41. Heyde, C. and S. Yakowitz. (2001). Unpublished work.
  42. Ibragimov, I. A. (1959). Some limit theorems for strict sense stationary stochastic processes. Doklady Akademii Nauk SSSR 125, 711–714.
  43. Ibragimov, I. A. (1963). A central limit theorem for a class of dependent random variables. Theory of Probability and its Applications 8, 83–94.
  44. Ibragimov, I. A. and Yu. V. Linnik. (1971). Independent and Stationary Sequences of Random Variables. Wolters-Noordhoff Publishing, Groningen, The Netherlands.
  45. Inagaki, N. (1970). On the limiting distribution of a sequence of estimators with uniformity property. Annals of the Institute of Statistical Mathematics 22, 1–13.
  46. Johnson, R. A. and G. G. Roussas. (1969). Asymptotically most powerful tests in Markov processes. Annals of Mathematical Statistics 40, 1207–1215.
  47. Johnson, R. A. and G. G. Roussas. (1970). Asymptotically optimal tests in Markov processes. Annals of Mathematical Statistics 41, 918–938.
  48. Johnson, R. A. and G. G. Roussas. (1972). Applications of contiguity to multiparameter hypothesis testing. Proceedings of the 6th Berkeley Symposium on Mathematical Statistics and Probability 1, 195–226.
  49. Kesten, H. and G. L. O’Brien. (1976). Examples of mixing sequences. Duke Mathematical Journal 43, 405–415.
  50. Lai, T. L. and S. Yakowitz. (1995). Machine learning and nonparametric bandit theory. IEEE Transactions on Automatic Control 40, 1199–1209.
  51. Le Cam, L. (1960). Locally asymptotically normal families of distributions. University of California Publications in Statistics 3, 37–98.
  52. Le Cam, L. (1966). Likelihood functions for large numbers of independent observations. In: Research Papers in Statistics, Festschrift for J. Neyman, F. N. David (Ed.), 167–187. John Wiley and Sons, New York.
  53. Le Cam, L. (1986). Asymptotic Methods in Statistical Decision Theory. Springer-Verlag, New York.
  54. Le Cam, L. and G. Yang. (2000). Asymptotics in Statistics: Some Basic Concepts, 2nd ed. Springer-Verlag, New York.
  55. Lind, B. and G. G. Roussas. (1977). Cramér-type conditions and quadratic mean differentiability. Annals of the Institute of Statistical Mathematics 29, 189–201.
  56. Longnecker, M. and R. J. Serfling. (1978). Moment inequalities for Sn under general stationary mixing sequences. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 43, 1–21.
  57. Mack, Y. P. and B. W. Silverman. (1982). Weak and strong uniform consistency of kernel regression estimates. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 61, 405–415.
  58. Masry, E. (1983). Probability density estimation from sampled data. IEEE Transactions on Information Theory 29, 696–709.
  59. Masry, E. (1986). Recursive probability density estimation for weakly dependent stationary processes. IEEE Transactions on Information Theory 32, 254–267.
  60. Masry, E. and L. Györfi. (1987). Strong consistency and rates for recursive probability density estimators of stationary processes. Journal of Multivariate Analysis 22, 79–93.
  61. Masry, E. (1989). Nonparametric estimation of conditional probability densities and expectations of stochastic processes: Strong consistency and rates. Stochastic Processes and their Applications 32, 109–127.
  62. Masry, E. (1996). Multivariate local polynomial regression estimation for time series: Uniform strong consistency and rates. Journal of Time Series Analysis 17, 571–599.
  63. Masry, E. and J. Fan. (1997). Local polynomial estimation of regression functions for mixing processes. Scandinavian Journal of Statistics 24, 165–179.
  64. Morvai, G., S. Yakowitz, and L. Györfi. (1996). Nonparametric inference for ergodic, stationary time series. Annals of Statistics 24, 370–379.
  65. Morvai, G., S. Yakowitz, and P. Algoet. (1997). Weakly convergent nonparametric forecasting for stationary time series. IEEE Transactions on Information Theory 43, 483–498.
  66. Müller, H.-G. (1988). Nonparametric Regression Analysis of Longitudinal Data. Lecture Notes in Statistics No. 46, Springer-Verlag, Heidelberg.
  67. Nadaraya, E. A. (1964a). On estimating regression. Theory of Probability and its Applications 9, 141–142.
  68. Nadaraya, E. A. (1964b). Some new estimates for distribution functions. Theory of Probability and its Applications 9, 491–500.
  69. Nadaraya, E. A. (1970). Remarks on nonparametric estimates for density functions and regression curves. Theory of Probability and its Applications 15, 134–137.
  70. Nguyen, H. T. (1981). Asymptotic normality of recursive density estimators in Markov processes. Publications of the Institute of Statistics, University of Paris 26, 73–93.
  71. Nguyen, H. T. (1984). Recursive nonparametric estimation in stationary Markov processes. Publications of the Institute of Statistics, University of Paris 29, 65–84.
  72. Parzen, E. (1962). On estimation of a probability density function and mode. Annals of Mathematical Statistics 33, 1065–1076.
  73. Peligrad, M. (1985). Convergence rates of the strong law for stationary mixing sequences. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 70, 307–314.
  74. Pham, T. D. and L. T. Tran. (1985). Some strong mixing properties of time series models. Stochastic Processes and their Applications 19, 297–303.
  75. Pham, T. D. (1986). The mixing property of bilinear and generalized random coefficient autoregressive models. Stochastic Processes and their Applications 23, 291–300.
  76. Philipp, W. (1969). The remainder in the central limit theorem for mixing stochastic processes. Annals of Mathematical Statistics 40, 601–609.
  77. Prakasa Rao, B. L. S. (1977). Berry-Esseen bound for density estimators of stationary Markov processes. Bulletin of Mathematical Statistics 17, 15–21.
  78. Prakasa Rao, B. L. S. (1983). Nonparametric Functional Estimation. Academic Press, New York.
  79. Priestley, M. B. and M. T. Chao. (1972). Nonparametric function fitting. Journal of the Royal Statistical Society Series B 34, 385–392.
  80. Robinson, P. M. (1983). Nonparametric estimators for time series. Journal of Time Series Analysis 4, 185–207.
  81. Robinson, P. M. (1986). On the consistency and finite-sample properties of nonparametric kernel time series regression, autoregression and density estimators. Annals of the Institute of Statistical Mathematics 38, Part A, 539–549.
  82. Rosenblatt, M. (1956a). Remarks on some nonparametric estimates of a density function. Annals of Mathematical Statistics 27, 832–837.
  83. Rosenblatt, M. (1956b). A central limit theorem and a strong mixing condition. Proceedings of the National Academy of Sciences, U.S.A. 42, 43–47.
  84. Rosenblatt, M. (1969). Conditional probability density and regression estimators. In: Multivariate Analysis II, P. R. Krishnaiah (Ed.), 25–31. Academic Press, New York.
  85. Rosenblatt, M. (1970). Density estimates and Markov sequences. In: Nonparametric Techniques in Statistical Inference, M. Puri (Ed.). Cambridge University Press, Cambridge.
  86. Rosenblatt, M. (1971). Curve estimates. Annals of Mathematical Statistics 42, 1815–1842.
  87. Rosenblatt, M. (1985). Stationary Sequences and Random Fields. Birkhäuser, Boston.
  88. Roussas, G. G. (1965a). Asymptotic inference in Markov processes. Annals of Mathematical Statistics 36, 978–992.
  89. Roussas, G. G. (1965b). Extension to Markov processes of a result by A. Wald about the consistency of the maximum likelihood estimate. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 4, 69–73.
  90. Roussas, G. G. (1968a). Asymptotic normality of the maximum likelihood estimate in Markov processes. Metrika 14, 62–70.
  91. Roussas, G. G. (1968b). Some applications of the asymptotic distribution of the likelihood functions to the asymptotic efficiency of estimates. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 10, 252–260.
  92. Roussas, G. G. (1969a). Nonparametric estimation in Markov processes. Annals of the Institute of Statistical Mathematics 21, 73–78.
  93. Roussas, G. G. (1969b). Nonparametric estimation of the transition distribution function of a Markov process. Annals of Mathematical Statistics 40, 1386–1400.
  94. Roussas, G. G. (1972). Contiguity of Probability Measures: Some Applications in Statistics. Cambridge University Press, Cambridge.
  95. Roussas, G. G. and A. Soms. (1973). On the exponential approximation of a family of probability measures and a representation theorem of Hájek-Inagaki. Annals of the Institute of Statistical Mathematics 25, 27–39.
  96. Roussas, G. G. (1979). Asymptotic distribution of the log-likelihood function for stochastic processes. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 47, 31–46.
  97. Roussas, G. G. and D. Ioannides. (1987). Moment inequalities for mixing sequences of random variables. Stochastic Analysis and Applications 5, 61–120.
  98. Roussas, G. G. (1988a). A moment inequality of Sn,kn for triangular arrays of random variables under mixing conditions, with applications. In: Statistical Theory and Data Analysis II, K. Matusita (Ed.), 273–292. North-Holland, Amsterdam.
  99. Roussas, G. G. (1988b). Nonparametric estimation in mixing sequences of random variables. Journal of Statistical Planning and Inference 18, 135–149.
  100. Roussas, G. G. and D. Ioannides. (1988). Probability bounds for sums of triangular arrays of random variables under mixing conditions. In: Statistical Theory and Data Analysis II, K. Matusita (Ed.), 293–308. North-Holland, Amsterdam.
  101. Roussas, G. G. (1989a). Consistent regression estimation with fixed design points under dependence conditions. Statistics and Probability Letters 8, 41–50.
  102. Roussas, G. G. (1989b). Hazard rate estimation under dependence conditions. Journal of Statistical Planning and Inference 22, 81–93.
  103. Roussas, G. G. (1989c). Some asymptotic properties of an estimate of the survival function under dependence conditions. Statistics and Probability Letters 8, 235–243.
  104. Roussas, G. G. (1990a). Asymptotic normality of the kernel estimate under dependence conditions: Application to hazard rate. Journal of Statistical Planning and Inference 25, 81–104.
  105. Roussas, G. G. (1990b). Nonparametric regression estimation and mixing conditions. Stochastic Processes and their Applications 36, 107–116.
  106. Roussas, G. G. (1991a). Estimation of transition distribution function and its quantiles in Markov processes: Strong consistency and asymptotic normality. In: Nonparametric Functional Estimation and Related Topics, G. G. Roussas (Ed.), 443–462. Kluwer Academic Publishers, The Netherlands.
  107. Roussas, G. G. (1991b). Recursive estimation of the transition distribution function of a Markov process: Asymptotic normality. Statistics and Probability Letters 11, 435–447.
  108. Roussas, G. G. (1992). Exact rates of almost sure convergence of a recursive kernel estimate of a probability density function: Application to regression and hazard rate estimation. Journal of Nonparametric Statistics 1, 171–195.
  109. Roussas, G. G. and L. T. Tran. (1992a). Joint asymptotic normality of kernel estimates under dependence, with applications to hazard rate. Journal of Nonparametric Statistics 1, 335–355.
  110. Roussas, G. G. and L. T. Tran. (1992b). Asymptotic normality of the recursive kernel regression estimate under dependence conditions. Annals of Statistics 20, 98–120.
  111. Roussas, G. G., L. T. Tran, and D. A. Ioannides. (1992). Fixed design regression for time series: Asymptotic normality. Journal of Multivariate Analysis 40, 262–291.
  112. Roussas, G. G. (1996). Efficient estimation of a smooth distribution function under α-mixing. In: Research Developments in Probability and Statistics, Festschrift in honor of Madan L. Puri, E. Brunner and M. Denker (Eds.), 205–217. VSP, The Netherlands.
  113. Roussas, G. G. and Y. G. Yatracos. (1996). Minimum distance regression type estimates with rates under weak dependence. Annals of the Institute of Statistical Mathematics 48, 267–281.
  114. Roussas, G. G. and Y. G. Yatracos. (1997). Minimum distance estimates with rates under ϕ-mixing. In: Research Papers in Probability and Statistics, Festschrift for Lucien Le Cam, D. Pollard, E. Torgersen and G. L. Yang (Eds.), 337–344. Springer-Verlag, New York.
  115. Roussas, G. G. and D. Bhattacharya. (1999a). Asymptotic behavior of the log-likelihood function in stochastic processes when based on a random number of random variables. In: Semi-Markov Models and Applications, J. Janssen and N. Limnios (Eds.), 119–147. Kluwer Academic Publishers, The Netherlands.
  116. Roussas, G. G. and D. Bhattacharya. (1999b). Some asymptotic results and exponential approximation in semi-Markov models. In: Semi-Markov Models and Applications, J. Janssen and N. Limnios (Eds.), 149–166. Kluwer Academic Publishers, The Netherlands.
  117. Roussas, G. G. (2000). Contiguity of probability measures. In: Encyclopaedia of Mathematics, Supplement II, 129–130. Kluwer Academic Publishers, The Netherlands.
  118. Rüschendorf, L. (1987). Consistency of estimates for multivariate density functions and for the mode. Sankhyā Series A 39, 243–250.
  119. Schmetterer, L. (1966). On the asymptotic efficiency of estimates. In: Research Papers in Statistics, Festschrift for J. Neyman, F. N. David (Ed.), 301–317. John Wiley and Sons, New York.
  120. Schuster, E. F. (1969). Estimation of a probability density function and its derivatives. Annals of Mathematical Statistics 40, 1187–1195.
  121. Schuster, E. F. (1972). Joint asymptotic distribution of the estimated regression function at a finite number of distinct points. Annals of Mathematical Statistics 43, 84–88.
  122. Scott, D. W. (1992). Multivariate Density Estimation. Wiley, New York.
  123. Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis. Chapman and Hall, London.
  124. Stamatelos, G. D. (1976). Asymptotic distribution of the log-likelihood function for stochastic processes: Some examples. Bulletin of the Greek Mathematical Society 17, 92–116.
  125. Takahata, H. (1993). On the rates in the central limit theorem for weakly dependent random variables. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 62, 477–480.
  126. Tran, L. T. (1990). Kernel density estimation under dependence. Statistics and Probability Letters 10, 193–201.
  127. Tran, L. T. and S. Yakowitz. (1993). Nearest neighbor estimators for random fields. Journal of Multivariate Analysis 44, 23–46.
  128. Tran, L. T., G. G. Roussas, S. Yakowitz, and V. Truong. (1996). Fixed-design regression for linear time series. Annals of Statistics 24, 975–991.
  129. Wald, A. (1941). Asymptotically most powerful tests of statistical hypotheses. Annals of Mathematical Statistics 12, 1–19.
  130. Wald, A. (1943). Tests of statistical hypotheses concerning several parameters when the number of observations is large. Transactions of the American Mathematical Society 54, 426–482.
  131. Watson, G. S. (1964). Smooth regression analysis. Sankhyā Series A 26, 359–372.
  132. Watson, G. S. and M. R. Leadbetter. (1964a). Hazard analysis I. Biometrika 51, 175–184.
  133. Watson, G. S. and M. R. Leadbetter. (1964b). Hazard analysis II. Sankhyā Series A 26, 110–116.
  134. Wegman, E. J. and H. I. Davis. (1979). Remarks on some recursive estimators of a probability density. Annals of Statistics 7, 316–327.
  135. Weiss, L. and J. Wolfowitz. (1966). Generalized maximum likelihood estimators. Theory of Probability and its Applications 11, 58–81.
  136. Weiss, L. and J. Wolfowitz. (1967). Maximum probability estimators. Annals of the Institute of Statistical Mathematics 19, 193–206.
  137. Weiss, L. and J. Wolfowitz. (1970). Maximum probability estimators and asymptotic sufficiency. Annals of the Institute of Statistical Mathematics 22, 225–244.
  138. Weiss, L. and J. Wolfowitz. (1974). Maximum Probability Estimators and Related Topics. Lecture Notes in Mathematics 424, Springer-Verlag, Berlin-Heidelberg-New York.
  139. Withers, C. S. (1981a). Conditions for linear processes to be strong mixing. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 57, 477–480.
  140. Withers, C. S. (1981b). Central limit theorems for dependent variables I. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 57, 509–534.
  141. Wolfowitz, J. (1965). Asymptotic efficiency of the maximum likelihood estimator. Theory of Probability and its Applications 10, 247–260.
  142. Yakowitz, S. (1972). A statistical model for daily stream flow records with applications to the Rillito River. Proceedings, International Symposium on Uncertainties in Hydrologic and Water Resources Systems, 273–283. University of Arizona, Tucson.
  143. Yakowitz, S. (1973). A stochastic model for daily river flows in an arid region. Water Resources Research 9, 1271–1285.
  144. Yakowitz, S. and J. Denny. (1973). On the statistics of hydrologic time series. Proceedings, 17th Annual Meeting of the Arizona Academy of Sciences 3, 146–163.
  145. Yakowitz, S. (1976a). Small sample hypothesis tests of Markov order, with applications to simulated and hydrologic chains. Journal of the American Statistical Association 71, 132–136.
  146. Yakowitz, S. (1976b). Model-free statistical methods for water table predictions. Water Resources Research 12, 836–844.
  147. Yakowitz, S. (1977a). Statistical models and methods for rivers in the Southwest. Proceedings, 21st Annual Meeting of the Arizona Academy of Sciences, Las Vegas.
  148. Yakowitz, S. (1977b). Computational Probability and Simulation. Addison-Wesley, Reading, Mass.
  149. Yakowitz, S. (1979). Nonparametric estimation of Markov transition functions. Annals of Statistics 7, 671–679.
  150. Yakowitz, S. (1985). Nonparametric density estimation, prediction, and regression for Markov sequences. Journal of the American Statistical Association 80, 215–221.
  151. Yakowitz, S. (1987). Nearest-neighbor methods for time series analysis. Journal of Time Series Analysis 8, 235–247.
  152. Yakowitz, S. (1989). Nonparametric density and regression estimation for Markov sequences without mixing assumptions. Journal of Multivariate Analysis 30, 124–136.
  153. Yakowitz, S. and W. Lowe. (1991). Nonparametric bandit methods. Annals of Operations Research 28, 297–312.
  154. Yakowitz, S., T. Jayawardena, and S. Li. (1992). Theory for automatic learning under partially observed Markov-dependent noise. IEEE Transactions on Automatic Control 37, 1316–1324.
  155. Yakowitz, S. (1993). Nearest neighbor regression estimation for null-recurrent Markov time series. Stochastic Processes and their Applications 48, 311–318.
  156. Yamato, H. (1973). Uniform convergence of an estimator of a distribution function. Bulletin of Mathematical Statistics 15, 69–78.
  157. Yokoyama, R. (1980). Moment bounds for stationary mixing sequences. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 52, 45–57.
  158. Yoshihara, K. (1978). Moment inequalities for mixing sequences. Kodai Mathematical Journal 1, 316–328.
  159. Yoshihara, K. (1993). Weakly Dependent Stochastic Sequences and Their Applications. Vol. II: Asymptotic Statistics Based on Weakly Dependent Data. Sanseido Co., Ltd., Tokyo.
  160. Yoshihara, K. (1994a). Weakly Dependent Stochastic Sequences and Their Applications. Vol. IV: Curve Estimation Based on Weakly Dependent Data. Sanseido Co., Ltd., Tokyo.
  161. Yoshihara, K. (1994b). Weakly Dependent Stochastic Sequences and Their Applications. Vol. V: Estimators Based on Time Series. Sanseido Co., Ltd., Tokyo.
  162. Yoshihara, K. (1995). Weakly Dependent Stochastic Sequences and Their Applications. Vol. VI: Statistical Inference Based on Weakly Dependent Data. Sanseido Co., Ltd., Tokyo.

Copyright information

© Springer Science + Business Media, Inc. 2002

Authors and Affiliations

  • George G. Roussas
    1. University of California, Davis
