Journal of Statistical Theory and Practice, Volume 7, Issue 3, pp. 537–543

Fitting Poisson Time-Series Models Using Bivariate Mixture Transition Distributions

  • M. Y. Hassan
  • M. Y. El-Bassiouni


Using bivariate mixture transition distribution (BMTD) models to model marked point processes (MPPs) requires careful reparameterization in order to incorporate lag information and capture the structure of the process being modeled. Such reparameterizations may depend on domain knowledge as well as on considerations of computational stability and predictive capability, among others. A reasonable reparameterization is needed to build in the lag information, whether linear or nonlinear, and to obtain better results in terms of model stability and predictive capability. This article addresses this issue. Several stable reparameterizations of the continuous-discrete BMTD model proposed by Hassan and El-Bassiouni (2012) are considered and compared using a real data set on Internet network traffic. The results show that inference based on BMTD models is not sensitive to the choice of functional form, as long as the chosen functions involve carefully selected exponential forms of the lagged data.
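The full model is not reproduced in this preview, but the modeling idea the abstract refers to can be illustrated with a toy example. The sketch below is not the bivariate BMTD model of Hassan and El-Bassiouni (2012); it is a simplified univariate Poisson time-series regression in which the conditional rate is an exponential function of the lagged count, in the spirit of the exponential reparameterizations discussed. All function names and parameter values are hypothetical.

```python
import math
import random

def rpois(lam, rng):
    """Draw one Poisson(lam) variate (Knuth's method; adequate for small rates)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= threshold:
            return k - 1

def simulate(n, b0, b1, rng):
    """Simulate y_t ~ Poisson(lambda_t) with lambda_t = exp(b0 + b1*log(1 + y_{t-1})).

    The exponential (log-link) form keeps the rate positive for any parameters,
    one reason such reparameterizations of lagged data are numerically stable."""
    y = [1]
    for _ in range(n):
        lam = math.exp(b0 + b1 * math.log(1 + y[-1]))
        y.append(rpois(lam, rng))
    return y

def fit(y, steps=4000, lr=1e-5):
    """Maximize the Poisson log-likelihood by plain gradient ascent.

    For this log-linear rate the log-likelihood is concave in (b0, b1),
    so a small fixed step size is enough for the sketch."""
    x = [math.log(1 + v) for v in y[:-1]]   # lagged covariate
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y[1:]):
            z = min(b0 + b1 * xi, 20.0)     # clip to avoid overflow far from the optimum
            lam = math.exp(z)
            r = yi - lam                    # score contribution (y_t - lambda_t)
            g0 += r
            g1 += r * xi
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

rng = random.Random(0)
y = simulate(1000, b0=0.5, b1=0.6, rng=rng)  # hypothetical true parameters
b0_hat, b1_hat = fit(y)
print(f"b0_hat={b0_hat:.3f}  b1_hat={b1_hat:.3f}")
```

The recovered estimates should land near the simulating values, illustrating how a log-link function of the lagged count builds lag dependence into a Poisson regression while keeping estimation stable.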


Keywords

Continuous-discrete bivariate distribution models, EM algorithm, Internet traffic, Poisson time-series regression models

AMS Subject Classification

62E15, 62F10, 62J02


References


  1. Dunis, C., and B. Zhou. 1998. Nonlinear modelling of high frequency financial time series. New York, John Wiley and Sons.
  2. Evangelatos, E., T. Rapsomanikis, N. Mitrou, and G. Stassinopoulos. 2000. A burst-level traffic analysis tool. Comput. Networks, 34, 3–22.
  3. Everitt, B. S., and D. J. Hand. 1981. Finite mixture distributions. New York, Chapman & Hall.
  4. Fraley, C., and A. Raftery. 1998. How many clusters? Comput. J., 41, 578–587.
  5. Gassiat, E., and D. Dacunha-Castelle. 1997. Estimation of the number of components in a mixture. Bernoulli, 3, 279–299.
  6. Hassan, M. Y., and M. Y. El-Bassiouni. 2012. Modeling Poisson marked point processes using bivariate mixture transition distributions. J. Stat. Comput. Simulation, 83, 1462–1474.
  7. Hassan, M. Y., and K.-S. Lii. 2006. Modeling marked point processes via bivariate mixture transition distribution models. J. Am. Stat. Assoc., 101, 1241–1252.
  8. Le, N., R. Martin, and A. Raftery. 1996. Modeling flat stretches, bursts, and outliers in time series using mixture transition distribution models. J. Am. Stat. Assoc., 91, 1504–1515.
  9. Louis, T. A. 1982. Finding the observed information matrix when using the EM algorithm. J. R. Stat. Soc., Ser. B, 44, 226–233.
  10. Paxson, V., and S. Floyd. 1995. Wide-area traffic: The failure of Poisson modeling. IEEE/ACM Trans. Network., 3(3), 226–244.
  11. Picard, F. 2007. An introduction to mixture models. Research report no. 7, Statistics for Systems Biology Group, Université d'Evry, Evry, France.
  12. Sanchez, J., and Y. He. 2005. Internet data analysis for the undergraduate statistics curriculum. J. Stat. Educ., 13, 1–20.
  13. Steele, R. J., and A. E. Raftery. 2009. Performance of Bayesian model selection criteria for Gaussian mixture models. Technical report no. 559, Department of Statistics, University of Washington, Seattle.
  14. Su, Y. V., C. Yang, and C. Lee. 2004. The analysis of packet loss prediction for Gilbert model with loss rate uplink. Inf. Process. Lett., 90, 155–159.
  15. Titterington, D. M., A. F. M. Smith, and U. E. Makov. 1985. Statistical analysis of finite mixture distributions. New York, Wiley.
  16. Woodbury, M. 1971. Discussion of paper by Hartley and Hocking. Biometrics, 27, 808–817.
  17. Yakowitz, S. J., and J. D. Spragins. 1968. On the identifiability of finite mixtures. Ann. Math. Stat., 39, 209–214.
  18. Yang, Z. R., and S. Chen. 1998. Robust maximum likelihood training of heteroscedastic probabilistic neural networks. Neural Networks, 11, 739–747.

Copyright information

© Grace Scientific Publishing 2013

Authors and Affiliations

  1. Department of Statistics, UAE University, Al Ain, United Arab Emirates