Sankhya B, Volume 77, Issue 2, pp 256–292

Assessing the Dependence Structure of the Components of Hybrid Time Series Processes Using Mutual Information

  • Apratim Guha

Abstract

Hybrid processes, which are multivariate time series in which some components are continuous-valued time series and the rest are discrete-valued time series or point processes, often arise in studies of neurological systems. The dependence structure among the components of hybrid processes is usually assessed by linear methods, which often prove inadequate. Mutual information (MI) is a useful extension of the correlation coefficient for studying such structures. In this paper we consider the application of MI to study the dependence structure of bivariate stationary hybrid processes. We develop results on the asymptotic behaviour of MI estimators based on kernel density estimates. However, because of the finite-sample behaviour of kernel density estimators, we advocate bootstrap-based methods for determining the bias and standard error of such estimates. We perform simulation studies to explore the finite-sample behaviour of these MI estimates. We also develop MI-based tests to assess whether the components of a hybrid process are independent and to compare the structure of multiple hybrid series. An application to a neuroscience data set is discussed.
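As an illustration of the kind of estimator the abstract describes, the sketch below computes a plug-in MI estimate between a continuous series and a binary spike indicator, using a Gaussian kernel density estimate for the continuous component and a moving-block bootstrap for the standard error. The function names, the Silverman rule-of-thumb bandwidth, and the block length are illustrative assumptions for this sketch, not the paper's actual procedure.

```python
import numpy as np

def gauss_kde(train, query, bw):
    # Gaussian kernel density estimate, evaluated at each query point
    z = (query[:, None] - train[None, :]) / bw
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(train) * bw * np.sqrt(2 * np.pi))

def mi_hybrid(x, y, bw=None):
    # Plug-in estimate of I(X; Y) = E[log f(x | y) - log f(x)]
    # for continuous x and discrete y, sharing one bandwidth across classes.
    x, y = np.asarray(x, float), np.asarray(y)
    n = len(x)
    if bw is None:
        bw = 1.06 * x.std() * n ** (-0.2)  # Silverman's rule of thumb (assumption)
    fx = gauss_kde(x, x, bw)               # marginal density at each sample point
    mi = 0.0
    for level in np.unique(y):
        mask = y == level
        fx_given_y = gauss_kde(x[mask], x[mask], bw)  # conditional density f(x | y)
        mi += np.log(fx_given_y / fx[mask]).sum() / n
    return mi

def block_bootstrap_se(x, y, B=200, block=25, seed=0):
    # Moving-block bootstrap: resampling contiguous blocks preserves the
    # serial dependence that an i.i.d. bootstrap would destroy.
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = []
    for _ in range(B):
        starts = rng.integers(0, n - block + 1, size=n // block)
        idx = np.concatenate([np.arange(s, s + block) for s in starts])
        reps.append(mi_hybrid(x[idx], y[idx]))
    return float(np.std(reps))

# Toy hybrid pair: the spike indicator y fires more often when x is large,
# so the MI estimate should be clearly positive.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-2.0 * x))).astype(int)
print(mi_hybrid(x, y), block_bootstrap_se(x, y, B=100))
```

Under independence the plug-in estimate is biased upward in finite samples, which is exactly why the paper argues for bootstrap-based bias and standard-error assessment rather than relying on the asymptotics alone.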

Keywords

Hybrid time series · Information theory · Mutual information · Neuroscience · Nonparametric estimation · Point processes

AMS (2000) Subject Classification

Primary 62M10; Secondary 60G55, 62G05, 94A15


References

  1. Amjad A. M., Halliday D. M., Rosenberg J. R. and Conway B. A. (1997). An extended difference of coherence test for comparing and combining independent estimates: theory and application to the study of motor units and physiological tremor. Journal of Neuroscience Methods 73, 69–79.
  2. Antos A. and Kontoyiannis I. (2001). Convergence properties of functional estimates for discrete distributions. Random Structures & Algorithms 19, 163–193.
  3. Biswas A. and Guha A. (2009). Time series analysis of categorical data on infant sleep status using auto-mutual information. Journal of Statistical Planning and Inference 139, 3076–3087.
  4. Bosq D. (1996). Nonparametric Statistics for Stochastic Processes. Springer-Verlag, New York.
  5. Bouezmarni T. and Scaillet O. (2005). Consistency of asymmetric kernel density estimators and smoothed histograms with application to income data. Econometric Theory 21, 390–412.
  6. Bradley R. (1983). Approximation theorems for strongly mixing random variables. Michigan Mathematical Journal 30, 69–81.
  7. Brillinger D. R. (2004). Some data analysis using mutual information. Brazilian Journal of Probability and Statistics 18, 163–183.
  8. Brillinger D. R. and Guha A. (2007). Mutual information in the frequency domain. Journal of Statistical Planning and Inference 137, 1074–1086.
  9. Cover T. M. and Thomas J. A. (1991). Elements of Information Theory. Wiley, New York.
  10. Dionisio A., Menezes R. and Mendes D. A. (2004). Mutual information: a measure of dependency for nonlinear time series. Physica A: Statistical Mechanics and its Applications 344, 326–329.
  11. Dunn O. J. (1961). Multiple comparisons among means. Journal of the American Statistical Association 56, 52–64.
  12. Elble R. J. (1986). Physiologic and essential tremor. Neurology 36, 225–231.
  13. Elble R. J. and Koller W. C. (1990). Tremor. Johns Hopkins University Press, Baltimore.
  14. Fan Y. and Linton O. (2003). Some higher-order theory for a consistent non-parametric model specification test. Journal of Statistical Planning and Inference 109, 125–154.
  15. Fernandes M. and Neri B. (2010). Nonparametric entropy-based tests of independence between stochastic processes. Econometric Reviews 29, 276–306.
  16. Geweke J. (1981). A comparison of tests of the independence of two covariance-stationary time series. Journal of the American Statistical Association 76, 363–373.
  17. Granger C. and Lin J. L. (1994). Using the mutual information coefficient to identify lags in nonlinear models. Journal of Time Series Analysis 15, 371–384.
  18. Granger C. W., Maasoumi E. and Racine J. (2004). A dependence metric for possibly nonlinear processes. Journal of Time Series Analysis 25, 649–669.
  19. Guha A. (2005). Analysis of Dependence Structures of Hybrid Stochastic Processes Using Mutual Information. Ph.D. Thesis, University of California, Berkeley.
  20. Guha A. and Biswas A. (2008). An overview of modeling techniques for hybrid brain data. Statistica Sinica 18, 1311–1340.
  21. Hall P. and Heyde C. C. (1980). Martingale Limit Theory and Its Application. Academic Press, San Francisco.
  22. Halliday D. M., Rosenberg J. R., Amjad A. M., Breeze P., Conway B. A. and Farmer S. F. (1995). A framework for the analysis of mixed time series/point process data: theory and application to the study of physiological tremor, single motor unit discharges and electromyograms. Progress in Biophysics and Molecular Biology 64, 237–278.
  23. Halliday D. M., Conway B. A., Farmer S. F. and Rosenberg J. R. (1999). Load-independent contributions from motor-unit synchronization to human physiological tremor. Journal of Neurophysiology 82, 664–675.
  24. Holm S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics 6, 65–70.
  25. Hong Y. and White H. (2005). Asymptotic distribution theory for nonparametric entropy measures of serial dependence. Econometrica 73, 837–901.
  26. Hsiao C., Li Q. and Racine J. S. (2007). A consistent model specification test with mixed discrete and continuous data. Journal of Econometrics 140, 802–826.
  27. Joe H. (1989). Estimation of entropy and other functionals of a multivariate density. Annals of the Institute of Statistical Mathematics 41, 683–697.
  28. Li Q., Maasoumi E. and Racine J. S. (2009). A nonparametric test for equality of distributions with mixed categorical and continuous data. Journal of Econometrics 148, 186–200.
  29. Li Q. and Racine J. (2003). Nonparametric estimation of distributions with categorical and continuous data. Journal of Multivariate Analysis 86, 266–292.
  30. Maasoumi E. and Racine J. S. (2009). A robust entropy-based test of asymmetry for discrete and continuous processes. Econometric Reviews 28, 246–261.
  31. Marshall J. and Walsh E. G. (1956). Physiological tremor. Journal of Neurology, Neurosurgery & Psychiatry 19, 260–267.
  32. Moddemeijer R. (1989a). Delay Estimation with Application to Electroencephalograms in Epilepsy. Ph.D. Thesis, Universiteit Twente, Enschede.
  33. Moddemeijer R. (1989b). On estimation of entropy and mutual information of continuous distributions. Signal Processing 16, 233–248.
  34. Moddemeijer R. (1999). A statistic to estimate the variance of the histogram-based mutual information estimator based on dependent pairs of observations. Signal Processing 75, 51–63.
  35. Müller H. G. (1991). Smooth optimum kernel estimators near endpoints. Biometrika 78, 521–530.
  36. Robinson P. M. (1991). Consistent nonparametric entropy-based testing. The Review of Economic Studies 58, 437–453.
  37. Shannon C. E. (1948). A mathematical theory of communication. Bell System Technical Journal 27, 379–423 & 623–656.
  38. Sheather S. J. and Jones M. C. (1991). A reliable data-based bandwidth selection method for kernel density estimation. Journal of the Royal Statistical Society Series B 53, 683–690.
  39. Stiles R. N. (1980). Mechanical and neural feedback factors in postural hand tremor of normal subjects. Journal of Neurophysiology 44, 40–59.
  40. Titterington D. M. (1980). A comparative study of kernel-based density estimates for categorical data. Technometrics 22, 259–268.
  41. Willie J. (1982a). Measuring the association of a time series and a point process. Journal of Applied Probability 19, 597–608.
  42. Willie J. (1982b). Covariation of a time series and a point process. Journal of Applied Probability 19, 609–618.

Copyright information

© Indian Statistical Institute 2015

Authors and Affiliations

  1. Production and Quantitative Methods Area, Indian Institute of Management, Ahmedabad, India
