Sufficient Dimension Reduction for Tensor Data

  • Yiwen Liu
  • Xin Xing
  • Wenxuan Zhong
Part of the Springer Handbooks of Computational Statistics book series (SHCS)


With the rapid development of science and technology, large volumes of array data have been collected in areas such as genomics, finance, image processing, and Internet search. Extracting useful information from such massive data has become a key challenge. Despite the urgent need for statistical tools to handle these data, few existing methods can fully address the high-dimensional problem. In this chapter, we review the general framework of sufficient dimension reduction and its generalization to tensor data. A tensor is a multi-way array, and its use is becoming increasingly important with advances in social and behavioral science, chemistry, and imaging technology. Vector-based statistical methods can be applied to tensor data by vectorizing each tensor into a vector. However, the vectorized tensor typically has a dimension that far exceeds the number of samples. To preserve the tensor structure and reduce the dimensionality simultaneously, we revisit the tensor sufficient dimension reduction model and apply it to colorimetric sensor arrays. The tensor sufficient dimension reduction method is simple yet powerful and exhibits competitive empirical performance in real data analysis.
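The contrast between vectorizing a tensor and preserving its mode structure can be sketched numerically. The snippet below is an illustrative example only (all dimensions and variable names are hypothetical, loosely mimicking a small colorimetric sensor array): vectorization yields a predictor whose dimension exceeds the sample size, whereas restricting the reduction direction to a rank-1 Kronecker form, as in tensor sufficient dimension reduction, requires estimating only the sum of the mode dimensions.

```python
import numpy as np

# Hypothetical data: n samples of a 6 x 6 x 3 tensor predictor
# (rows x columns x color channels of a small sensor array).
rng = np.random.default_rng(0)
n, d1, d2, d3 = 50, 6, 6, 3
X = rng.normal(size=(n, d1, d2, d3))

# Naive approach: vectorize each tensor. The predictor dimension becomes
# d1*d2*d3 = 108, which already exceeds the sample size n = 50.
X_vec = X.reshape(n, -1)

# Structure-preserving alternative: restrict the reduction direction to a
# rank-1 Kronecker form b3 (x) b2 (x) b1, so only d1 + d2 + d3 = 15 free
# parameters are involved instead of 108. (b1, b2, b3 here are arbitrary
# placeholders, not estimated directions.)
b1, b2, b3 = rng.normal(size=d1), rng.normal(size=d2), rng.normal(size=d3)

# Project each sample tensor onto the rank-1 direction mode by mode.
scores = np.einsum('nijk,i,j,k->n', X, b1, b2, b3)

print(X_vec.shape)   # (50, 108)
print(scores.shape)  # (50,)
```

In actual estimation the mode-wise directions would be fitted iteratively rather than drawn at random; the point of the sketch is only the parameter count, 108 versus 15.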


Keywords: Sufficient dimension reduction · Tensor analysis · Iterative estimation · Colorimetric sensor arrays



This work was supported by National Science Foundation grants DMS-1222718 and DMS-1055815, National Institutes of Health grant R01 GM113242, and National Institute of General Medical Sciences grant R01 GM122080.



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. University of Georgia, Athens, USA
