
Some Statistical Problems with High Dimensional Financial Data

  • Arnab Chakrabarti
  • Rituparna Sen
Chapter
Part of the New Economic Windows book series (NEW)

Abstract

For high-dimensional data, some of the standard statistical techniques do not work well, so modifications or further development of statistical methods are necessary. In this paper we explore these modifications. We start with the important problem of estimating a high-dimensional covariance matrix. We then discuss other important statistical techniques, such as high-dimensional regression, principal component analysis, multiple testing, and classification. We describe some of the fast algorithms that can be readily applied in practice.
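As a flavour of the fast, readily applicable algorithms mentioned above, the sketch below illustrates shrinkage estimation of a covariance matrix when the number of assets exceeds the number of observations, using the Ledoit–Wolf estimator as implemented in scikit-learn. The simulated return data, the choice of estimator, and all parameter values are our own illustrative assumptions, not part of the chapter itself.

```python
# A minimal sketch (illustrative assumptions, not the chapter's own code):
# shrinkage estimation of a high-dimensional covariance matrix with more
# assets (p) than observations (n), where the sample covariance is singular.
import numpy as np
from sklearn.covariance import LedoitWolf, empirical_covariance

rng = np.random.default_rng(0)
n_days, n_assets = 50, 200                 # n < p: high-dimensional regime
returns = rng.normal(scale=0.01, size=(n_days, n_assets))  # simulated returns

sample_cov = empirical_covariance(returns)  # rank-deficient when p > n
lw = LedoitWolf().fit(returns)              # well-conditioned shrinkage estimate
shrunk_cov = lw.covariance_

print("rank of sample covariance:", np.linalg.matrix_rank(sample_cov))
print("estimated shrinkage intensity:", round(lw.shrinkage_, 3))
```

The shrinkage estimator pulls the eigenvalues of the sample covariance toward a common value, giving an invertible matrix that can be used directly in downstream tasks such as portfolio optimisation.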

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Indian Statistical Institute, Chennai, India
