A Portmanteau Local Feature Discrimination Approach to the Classification with High-dimensional Matrix-variate Data


Matrix-variate data arise in many scientific fields such as face recognition and medical imaging. Matrix data contain important structural information that can be destroyed by vectorization, so methods that incorporate this structure into the analysis have significant advantages over vectorization-based approaches. In this article, we consider the problem of two-class classification with high-dimensional matrix-variate data and propose a novel portmanteau-local-feature discrimination (PLFD) method. The method first identifies local discriminative features of the matrix variate and then pools them together to construct a discrimination rule. We investigate the theoretical properties of the PLFD method and establish its asymptotic optimality. Extensive numerical studies, including simulations and real-data analyses, compare this method with others available in the literature and demonstrate that PLFD has a substantial advantage in terms of misclassification rate.
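The two-step idea described above — first detect local discriminative features of the matrix, then pool them into a discrimination rule — can be illustrated with a minimal sketch. This is not the authors' implementation: the entrywise t-statistic screening, the hard threshold of 3, and the independence-working linear rule are simplifying assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, n = 20, 20, 100  # matrix dimensions and per-class sample size

# Two classes of matrix observations that differ only on a small block of entries.
mu1 = np.zeros((p, q))
mu2 = np.zeros((p, q))
mu2[:3, :3] = 1.5  # the "local" discriminative features
X1 = mu1 + rng.standard_normal((n, p, q))
X2 = mu2 + rng.standard_normal((n, p, q))

# Step 1: identify local discriminative features by screening entrywise
# standardized mean differences (a crude stand-in for PLFD's feature detection).
diff = X1.mean(axis=0) - X2.mean(axis=0)
pooled_var = (X1.var(axis=0) + X2.var(axis=0)) / 2
t_stat = np.abs(diff) / np.sqrt(pooled_var * 2.0 / n)
selected = t_stat > 3.0  # hard threshold; a real method would choose this data-adaptively

# Step 2: pool the selected entries into a linear discriminant rule
# (working independence across selected entries, for simplicity).
m1, m2 = X1.mean(axis=0)[selected], X2.mean(axis=0)[selected]
w = (m1 - m2) / pooled_var[selected]  # discriminant direction
c = w @ (m1 + m2) / 2                 # midpoint threshold

def classify(X):
    """Assign a p-by-q matrix X to class 1 or class 2."""
    return 1 if w @ X[selected] > c else 2

# Evaluate on fresh test data.
T1 = mu1 + rng.standard_normal((200, p, q))
T2 = mu2 + rng.standard_normal((200, p, q))
acc = (np.mean([classify(x) == 1 for x in T1])
       + np.mean([classify(x) == 2 for x in T2])) / 2
print(round(acc, 2))
```

Because the discriminative signal is confined to a few entries, screening before pooling discards most of the noise coordinates, which is what makes this kind of approach viable when the matrix dimensions are large relative to the sample size.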


[Figure 1 and Figure 2 appear here in the original article.]




Author information



Corresponding author

Correspondence to Zengchao Xu.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Supplementary material is available for this article as a PDF (134 KB).


About this article


Cite this article

Xu, Z., Luo, S. & Chen, Z. A Portmanteau Local Feature Discrimination Approach to the Classification with High-dimensional Matrix-variate Data. Sankhya A (2021).



Keywords

  • Matrix-variate data
  • Classification
  • Feature selection
  • Asymptotic optimality


AMS Subject Classification

  • Primary 62H; Secondary 62H30