
A sparse neighborhood preserving non-negative tensor factorization algorithm for facial expression recognition

  • Theoretical Advances

Abstract

In this paper, a novel sparse neighborhood preserving non-negative tensor factorization (SNPNTF) algorithm is proposed for facial expression recognition. It is derived from non-negative tensor factorization (NTF), and it works in the rank-one tensor space. A sparseness constraint is incorporated into the objective function; the optimization takes a step in the direction of the negative gradient and then projects onto the sparsity-constrained space. To exploit the spatial neighborhood structure and the class-based discriminant information, a neighborhood preserving constraint is adopted based on manifold learning and graph preserving theory. This constraint combines the Laplacian graph, which encodes the spatial information of the face samples, with the penalty graph, which encodes the predefined class information. With it, the parts-based representations obtained by SNPNTF vary smoothly along the geodesics of the data manifold and are more discriminant for recognition. SNPNTF is a quadratic convex function in the tensor space, so it can converge to the optimal solution, and the gradient descent method is used for its optimization to ensure this convergence property. Experiments are conducted on the JAFFE database, the Cohn–Kanade database and the AR database. The results demonstrate that SNPNTF provides effective facial representations and achieves better recognition performance than non-negative matrix factorization, NTF and their variant algorithms, while its convergence is well guaranteed.



Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (61370127, 61472030), the Fundamental Research Funds for the Central Universities (2014JBZ004) and the Beijing Higher Education Young Elite Teacher Project (YETP0544).

Author information


Corresponding author

Correspondence to Gaoyun An.

Ethics declarations

Conflict of interest

We declare that we have no conflicts of interest. We have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and no professional or personal interest of any nature in any product, service and/or company that could be construed as influencing the position presented in, or the review of, this manuscript.

Appendix

The calculations related to Eq. (25)

First, we discuss the calculation of \(\nabla f_{\mathbf{V},\mathbf{Z}}(\mathbf{U}^{(t)})\) and \(\nabla f_{\mathbf{U},\mathbf{Z}}(\mathbf{V}^{(t)})\). The objective function of SNPNTF is written as:

$$\begin{aligned} f &= \frac{1}{2}\Bigl\|\mathbf{A} - \sum_{r=1}^{R} \mathbf{u}_r \otimes \mathbf{v}_r \otimes \mathbf{z}_r\Bigr\|_F^2 + \frac{1}{2}\varepsilon\|\mathbf{u}\|_1 + \frac{1}{2}\gamma\|\mathbf{v}\|_1 + \frac{1}{2}\lambda\,\mathrm{tr}\bigl(\mathbf{Z}^T\mathbf{L}\mathbf{Z}\bigr) - \frac{1}{2}\sigma\,\mathrm{tr}\bigl(\mathbf{Z}^T\mathbf{L}^p\mathbf{Z}\bigr) \\ &= \frac{1}{2}\Bigl\langle \mathbf{A} - \sum_{r=1}^{R} \mathbf{u}_r \otimes \mathbf{v}_r \otimes \mathbf{z}_r,\ \mathbf{A} - \sum_{r=1}^{R} \mathbf{u}_r \otimes \mathbf{v}_r \otimes \mathbf{z}_r \Bigr\rangle + \frac{1}{2}\varepsilon\|\mathbf{u}\|_1 + \frac{1}{2}\gamma\|\mathbf{v}\|_1 + \frac{1}{2}\lambda\sum_{r=1}^{R}\mathbf{z}_r^T\mathbf{L}\mathbf{z}_r - \frac{1}{2}\sigma\sum_{r=1}^{R}\mathbf{z}_r^T\mathbf{L}^p\mathbf{z}_r \end{aligned}$$
(36)
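Before differentiating, it may help to see Eq. (36) evaluated numerically. The following is a minimal numpy sketch under assumed conventions: \(\mathbf{A}\) stored as an \(m_1 \times m_2 \times N\) array, the rank-one factors stored as columns of U, V and Z, and a hypothetical function name; it is an illustration, not the authors' implementation.

```python
import numpy as np

def snpntf_objective(A, U, V, Z, L, Lp, eps, gamma, lam, sigma):
    # A: m1 x m2 x N tensor; U: m1 x R, V: m2 x R, Z: N x R
    # (columns are the rank-one factors u_r, v_r, z_r).
    # L = D - S and Lp = D^p - S^p are the N x N graph Laplacians.
    recon = np.einsum('pr,qr,ir->pqi', U, V, Z)   # sum_r u_r (x) v_r (x) z_r
    fit = 0.5 * np.sum((A - recon) ** 2)          # squared Frobenius norm term
    sparse = 0.5 * eps * np.abs(U).sum() + 0.5 * gamma * np.abs(V).sum()
    graph = 0.5 * lam * np.trace(Z.T @ L @ Z) - 0.5 * sigma * np.trace(Z.T @ Lp @ Z)
    return fit + sparse + graph
```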

The differential of f is

$$\begin{aligned} d(f) &= \frac{1}{2}\,d\Bigl\langle \mathbf{A} - \sum_{r=1}^{R} \mathbf{u}_r \otimes \mathbf{v}_r \otimes \mathbf{z}_r,\ \mathbf{A} - \sum_{r=1}^{R} \mathbf{u}_r \otimes \mathbf{v}_r \otimes \mathbf{z}_r \Bigr\rangle + \frac{1}{2}\,d(\varepsilon\|\mathbf{u}\|_1) + \frac{1}{2}\,d(\gamma\|\mathbf{v}\|_1) + \frac{1}{2}\,d\Bigl(\lambda\sum_{r=1}^{R}\mathbf{z}_r^T\mathbf{L}\mathbf{z}_r\Bigr) - \frac{1}{2}\,d\Bigl(\sigma\sum_{r=1}^{R}\mathbf{z}_r^T\mathbf{L}^p\mathbf{z}_r\Bigr) \\ &= \Bigl\langle \mathbf{A} - \sum_{r=1}^{R} \mathbf{u}_r \otimes \mathbf{v}_r \otimes \mathbf{z}_r,\ d\Bigl(\mathbf{A} - \sum_{r=1}^{R} \mathbf{u}_r \otimes \mathbf{v}_r \otimes \mathbf{z}_r\Bigr) \Bigr\rangle + \frac{1}{2}\,d(\varepsilon\|\mathbf{u}\|_1) + \frac{1}{2}\,d(\gamma\|\mathbf{v}\|_1) + \frac{1}{2}\,d\Bigl(\lambda\sum_{r=1}^{R}\mathbf{z}_r^T\mathbf{L}\mathbf{z}_r\Bigr) - \frac{1}{2}\,d\Bigl(\sigma\sum_{r=1}^{R}\mathbf{z}_r^T\mathbf{L}^p\mathbf{z}_r\Bigr) \end{aligned}$$
(37)

To calculate \(\nabla f_{\mathbf{V},\mathbf{Z}}(\mathbf{U}^{(t)})\), the differential of f along \(\mathbf{u}_s\ (\forall s,\ 1 \le s \le R)\) is

$$\begin{aligned} d(f_{\mathbf{u}_s}) &= \Bigl\langle \mathbf{A} - \sum_{r=1}^{R}\mathbf{u}_r \otimes \mathbf{v}_r \otimes \mathbf{z}_r,\ -\,d(\mathbf{u}_s) \otimes \mathbf{v}_s \otimes \mathbf{z}_s \Bigr\rangle + \varepsilon + 0 + 0 - 0 \\ &= \Bigl\langle \sum_{r=1}^{R}\mathbf{u}_r \otimes \mathbf{v}_r \otimes \mathbf{z}_r,\ d(\mathbf{u}_s) \otimes \mathbf{v}_s \otimes \mathbf{z}_s \Bigr\rangle - \bigl\langle \mathbf{A},\ d(\mathbf{u}_s) \otimes \mathbf{v}_s \otimes \mathbf{z}_s \bigr\rangle + \varepsilon \end{aligned}$$
(38)

In Eq. (38), the sparseness constraint appears as the constant \(\varepsilon\). This means that the value of the coefficient \(\varepsilon\) controls the degree of sparseness, which confirms the analysis of Sect. 3.2 mathematically.

Similarly, the partial derivative with respect to \(u_s^p\ (1 \le p \le m_1)\) is

$$\frac{\partial f}{\partial u_s^p} = \Bigl\langle \sum_{r=1}^{R}\mathbf{u}_r \otimes \mathbf{v}_r \otimes \mathbf{z}_r,\ \mathbf{e}^p \otimes \mathbf{v}_s \otimes \mathbf{z}_s \Bigr\rangle - \bigl\langle \mathbf{A},\ \mathbf{e}^p \otimes \mathbf{v}_s \otimes \mathbf{z}_s \bigr\rangle + \varepsilon$$
(39)

where the pth element of \(\mathbf{e}^p \in \mathbb{R}^{m_1}\) is 1 and all other elements are 0; that is, \((\mathbf{e}^p)_p = 1\) and \((\mathbf{e}^p)_{k \ne p} = 0\). According to Definition 2.1, for any tensors \(\mathbf{A}_1, \mathbf{A}_2 \in \mathbb{R}^{a_1 \times a_2 \times \cdots \times a_n}\) and \(\mathbf{B}_1, \mathbf{B}_2 \in \mathbb{R}^{b_1 \times b_2 \times \cdots \times b_n}\), it holds that \(\langle \mathbf{A}_1 \otimes \mathbf{B}_1, \mathbf{A}_2 \otimes \mathbf{B}_2 \rangle = \langle \mathbf{A}_1, \mathbf{A}_2 \rangle \langle \mathbf{B}_1, \mathbf{B}_2 \rangle\). Then, Eq. (39) can be written as:

$$\begin{aligned} \frac{\partial f}{\partial u_s^p} &= \sum_{r=1}^{R} \langle \mathbf{u}_r, \mathbf{e}^p \rangle \langle \mathbf{v}_r \otimes \mathbf{z}_r,\ \mathbf{v}_s \otimes \mathbf{z}_s \rangle - \sum_{k=1}^{m_1}\sum_{q=1}^{m_2}\sum_{i=1}^{N} A_{kqi}\,(\mathbf{e}^p)_k\, v_s^q z_s^i + \varepsilon \\ &= \sum_{r=1}^{R} \langle \mathbf{u}_r, \mathbf{e}^p \rangle \langle \mathbf{v}_r, \mathbf{v}_s \rangle \langle \mathbf{z}_r, \mathbf{z}_s \rangle - \Bigl( \sum_{q=1}^{m_2}\sum_{i=1}^{N} A_{pqi}\,(\mathbf{e}^p)_p\, v_s^q z_s^i + \sum_{k \ne p} A_{kqi}\,(\mathbf{e}^p)_k\, v_s^q z_s^i \Bigr) + \varepsilon \\ &= \sum_{r=1}^{R} \Bigl( u_r^p\,(\mathbf{e}^p)_p + \sum_{k \ne p} u_r^k\,(\mathbf{e}^p)_k \Bigr) \langle \mathbf{v}_r, \mathbf{v}_s \rangle \langle \mathbf{z}_r, \mathbf{z}_s \rangle - \Bigl( \sum_{q=1}^{m_2}\sum_{i=1}^{N} A_{pqi}\, v_s^q z_s^i + 0 \Bigr) + \varepsilon \\ &= \sum_{r=1}^{R} u_r^p\, \langle \mathbf{v}_r, \mathbf{v}_s \rangle \langle \mathbf{z}_r, \mathbf{z}_s \rangle - \sum_{q=1}^{m_2}\sum_{i=1}^{N} A_{pqi}\, v_s^q z_s^i + \varepsilon \end{aligned}$$
(40)
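The inner-product identity from Definition 2.1 used in the second step above is easy to verify numerically for the vector case; the snippet below is our own illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
a1, a2 = rng.random(4), rng.random(4)
b1, b2 = rng.random(5), rng.random(5)

# <a1 (x) b1, a2 (x) b2>, computed entry-wise on the outer products ...
lhs = np.sum(np.outer(a1, b1) * np.outer(a2, b2))
# ... equals the product of the two vector inner products <a1,a2><b1,b2>
rhs = np.dot(a1, a2) * np.dot(b1, b2)
assert np.isclose(lhs, rhs)
```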

According to Eq. (25), the update rule for \(u_s^p\) is

$$\begin{aligned} u_s^p &= u_s^p - \mu(u_s^p)\,\frac{\partial f}{\partial u_s^p} \\ &= u_s^p - \mu(u_s^p)\Bigl( \sum_{r=1}^{R} u_r^p\, \langle \mathbf{v}_r, \mathbf{v}_s \rangle \langle \mathbf{z}_r, \mathbf{z}_s \rangle - \sum_{q=1}^{m_2}\sum_{i=1}^{N} A_{pqi}\, v_s^q z_s^i + \varepsilon \Bigr) \end{aligned}$$
(41)

To guarantee the non-negativity of \(u_s^p\), the update step \(\mu(u_s^p)\) is set as:

$$\mu(u_s^p) = \frac{u_s^p}{\sum_{r=1}^{R} u_r^p\, \langle \mathbf{v}_r, \mathbf{v}_s \rangle \langle \mathbf{z}_r, \mathbf{z}_s \rangle}$$
(42)

If the denominator is close to 0, Eq. (42) leads to unstable results. Therefore, a small positive constant, set to 0.01, is added to the denominator. In the remainder of this paper, such additive constants are used in all denominators.

Now the update equation for \(u_s^p\) is

$$\begin{aligned} u_s^p &= u_s^p\, \frac{\sum_{q=1}^{m_2}\sum_{i=1}^{N} A_{pqi}\, v_s^q z_s^i - \varepsilon}{\sum_{r=1}^{R} u_r^p\, \langle \mathbf{v}_r, \mathbf{v}_s \rangle \langle \mathbf{z}_r, \mathbf{z}_s \rangle} \\ &= \frac{u_s^p\,\bigl(\mathbf{v}_s^T \mathbf{A}_{p;;}\, \mathbf{z}_s - \varepsilon\bigr)}{\sum_{r=1}^{R} u_r^p\, (\mathbf{v}_r^T \mathbf{v}_s)(\mathbf{z}_r^T \mathbf{z}_s)} \\ &= \frac{u_s^p\,\bigl(\mathbf{v}_s^T \mathbf{A}_{p;;}\, \mathbf{z}_s - \varepsilon\bigr)}{\mathbf{U}_{p;}\bigl((\mathbf{V}^T \mathbf{v}_s) \odot (\mathbf{Z}^T \mathbf{z}_s)\bigr)} \end{aligned}$$
(43)

where \(\mathbf{U}_{p;} \in \mathbb{R}^{1 \times R}\) denotes the pth row of \(\mathbf{U} = [\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_R]\), \(\odot\) is the matrix Hadamard (element-wise) product, i.e., \((X \odot Y)_{ij} = X_{ij} Y_{ij}\), and \(\mathbf{A}_{p;;} \in \mathbb{R}^{m_2 \times N}\) is the matrix slice obtained by fixing the first mode of \(\mathbf{A}\) at index p and traversing the other two modes. It is defined as:

$$(\mathbf{A}_{p;;})_{qi} = A_{pqi}, \quad 1 \le q \le m_2,\quad 1 \le i \le N$$
(44)
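As an illustration, Eq. (43) can be implemented column by column in the layout assumed earlier, with the 0.01 additive constant introduced after Eq. (42). This is a sketch, not the authors' code; the zero-clamp on the numerator is our own safeguard against \(\varepsilon\) exceeding the data term and is not part of Eq. (43).

```python
import numpy as np

def update_U(A, U, V, Z, eps, delta=0.01):
    # Multiplicative update of Eq. (43); columns u_s are updated sequentially.
    for s in range(U.shape[1]):
        # numerator: v_s^T A_{p;;} z_s - eps, for all rows p at once
        numer = np.einsum('pqi,q,i->p', A, V[:, s], Z[:, s]) - eps
        # denominator: U_{p;} ((V^T v_s) (.) (Z^T z_s)), stabilized by delta
        denom = U @ ((V.T @ V[:, s]) * (Z.T @ Z[:, s])) + delta
        U[:, s] = U[:, s] * np.maximum(numer, 0.0) / denom
    return U
```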

According to the analysis above, the update equation for the qth element of \(\mathbf{v}_s\) (\(v_s^q\), \(1 \le s \le R\), \(1 \le q \le m_2\)) can similarly be written as:

$$\begin{aligned} v_s^q &= v_s^q\, \frac{\sum_{p=1}^{m_1}\sum_{i=1}^{N} A_{pqi}\, u_s^p z_s^i - \gamma}{\sum_{r=1}^{R} v_r^q\, \langle \mathbf{u}_r, \mathbf{u}_s \rangle \langle \mathbf{z}_r, \mathbf{z}_s \rangle} \\ &= \frac{v_s^q\,\bigl(\mathbf{u}_s^T \mathbf{A}_{;q;}\, \mathbf{z}_s - \gamma\bigr)}{\mathbf{V}_{q;}\bigl((\mathbf{U}^T \mathbf{u}_s) \odot (\mathbf{Z}^T \mathbf{z}_s)\bigr)} \end{aligned}$$
(45)

where \(\mathbf{V}_{q;} \in \mathbb{R}^{1 \times R}\) denotes the qth row of \(\mathbf{V} = [\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_R] \in \mathbb{R}^{m_2 \times R}\), \(\odot\) is again the Hadamard product, and \(\mathbf{A}_{;q;} \in \mathbb{R}^{m_1 \times N}\) is the matrix slice obtained by fixing the second mode of \(\mathbf{A}\) at index q and traversing the other two modes. It is defined as:

$$(\mathbf{A}_{;q;})_{pi} = A_{pqi}, \quad 1 \le p \le m_1,\quad 1 \le i \le N$$
(46)

Now \(\mathbf{v}_r\) and \(\mathbf{u}_r\) are calculated.
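Since Eq. (45) has exactly the form of Eq. (43) with the first two modes of \(\mathbf{A}\) exchanged, a sketch of the v-update can simply reuse the hypothetical update_U above on the transposed tensor.

```python
def update_V(A, U, V, Z, gamma, delta=0.01):
    # Eq. (45): identical in form to Eq. (43) with modes 1 and 2 of A swapped.
    return update_U(A.transpose(1, 0, 2), V, U, Z, gamma, delta)
```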

Then, we discuss the calculation of \(\nabla f_{\mathbf{U},\mathbf{V}}(\mathbf{Z}^{(t)})\). The differential of f along \(\mathbf{z}_s\ (\forall s,\ 1 \le s \le R)\) is

$$\begin{aligned} d(f_{\mathbf{z}_s}) &= \Bigl\langle \mathbf{A} - \sum_{r=1}^{R}\mathbf{u}_r \otimes \mathbf{v}_r \otimes \mathbf{z}_r,\ -\,\mathbf{u}_s \otimes \mathbf{v}_s \otimes d(\mathbf{z}_s) \Bigr\rangle + \frac{1}{2}\,d\bigl(\lambda\mathbf{z}_s^T\mathbf{L}\mathbf{z}_s\bigr) - \frac{1}{2}\,d\bigl(\sigma\mathbf{z}_s^T\mathbf{L}^p\mathbf{z}_s\bigr) \\ &= \Bigl\langle \sum_{r=1}^{R}\mathbf{u}_r \otimes \mathbf{v}_r \otimes \mathbf{z}_r,\ \mathbf{u}_s \otimes \mathbf{v}_s \otimes d(\mathbf{z}_s) \Bigr\rangle - \bigl\langle \mathbf{A},\ \mathbf{u}_s \otimes \mathbf{v}_s \otimes d(\mathbf{z}_s) \bigr\rangle + d\Bigl(\frac{1}{2}\lambda\mathbf{z}_s^T(\mathbf{D}-\mathbf{S})\mathbf{z}_s\Bigr) - d\Bigl(\frac{1}{2}\sigma\mathbf{z}_s^T(\mathbf{D}^p-\mathbf{S}^p)\mathbf{z}_s\Bigr) \end{aligned}$$
(47)

For \(\tfrac{1}{2}\lambda\mathbf{z}_s^T(\mathbf{D}-\mathbf{S})\mathbf{z}_s\), the partial derivative with respect to \(z_s^i\) is

$$\frac{\partial\bigl(\tfrac{1}{2}\lambda \mathbf{z}_s^T(\mathbf{D}-\mathbf{S})\mathbf{z}_s\bigr)}{\partial z_s^i} = \frac{\partial \mathbf{z}_s}{\partial z_s^i}\, \frac{\partial\bigl(\tfrac{1}{2}\lambda \mathbf{z}_s^T(\mathbf{D}-\mathbf{S})\mathbf{z}_s\bigr)}{\partial \mathbf{z}_s} = (\mathbf{e}^i)^T \lambda (\mathbf{D}-\mathbf{S})\,\mathbf{z}_s$$
(48)

where the ith element of \(\mathbf{e}^i \in \mathbb{R}^N\) is 1 and all other elements are 0; that is, \((\mathbf{e}^i)_i = 1\) and \((\mathbf{e}^i)_{k \ne i} = 0\). Then, the partial derivative of f with respect to \(z_s^i\) is:

$$\begin{aligned} \frac{\partial f}{\partial z_s^i} &= \Bigl\langle \sum_{r=1}^{R}\mathbf{u}_r \otimes \mathbf{v}_r \otimes \mathbf{z}_r,\ \mathbf{u}_s \otimes \mathbf{v}_s \otimes \mathbf{e}^i \Bigr\rangle - \bigl\langle \mathbf{A},\ \mathbf{u}_s \otimes \mathbf{v}_s \otimes \mathbf{e}^i \bigr\rangle + \lambda (\mathbf{e}^i)^T(\mathbf{D}-\mathbf{S})\mathbf{z}_s - \sigma (\mathbf{e}^i)^T(\mathbf{D}^p-\mathbf{S}^p)\mathbf{z}_s \\ &= \sum_{r=1}^{R} \langle \mathbf{u}_r, \mathbf{u}_s \rangle \langle \mathbf{v}_r, \mathbf{v}_s \rangle\, z_r^i - \sum_{p=1}^{m_1}\sum_{q=1}^{m_2} A_{pqi}\, u_s^p v_s^q + \Bigl(\lambda D_{ii}\, z_s^i - \lambda \sum_{k=1}^{N} S_{ik}\, z_s^k\Bigr) - \Bigl(\sigma D^p_{ii}\, z_s^i - \sigma \sum_{k=1}^{N} S^p_{ik}\, z_s^k\Bigr) \end{aligned}$$
(49)

According to Eq. (25), the update rule for \(z_s^i\) is

$$\begin{aligned} z_s^i &= z_s^i - \mu(z_s^i)\,\frac{\partial f}{\partial z_s^i} \\ &= z_s^i - \mu(z_s^i)\Bigl( \sum_{r=1}^{R} \langle \mathbf{u}_r, \mathbf{u}_s \rangle \langle \mathbf{v}_r, \mathbf{v}_s \rangle\, z_r^i + \lambda D_{ii}\, z_s^i + \sigma \sum_{k=1}^{N} S^p_{ik}\, z_s^k - \sum_{p=1}^{m_1}\sum_{q=1}^{m_2} A_{pqi}\, u_s^p v_s^q - \lambda \sum_{k=1}^{N} S_{ik}\, z_s^k - \sigma D^p_{ii}\, z_s^i \Bigr) \end{aligned}$$
(50)

To ensure non-negativity, the update step \(\mu(z_s^i)\) is set as:

$$\mu(z_s^i) = \frac{z_s^i}{\sum_{r=1}^{R} \langle \mathbf{u}_r, \mathbf{u}_s \rangle \langle \mathbf{v}_r, \mathbf{v}_s \rangle\, z_r^i + \lambda D_{ii}\, z_s^i + \sigma \sum_{k=1}^{N} S^p_{ik}\, z_s^k}$$
(51)

The final update equation for \(z_s^i\) is

$$\begin{aligned} z_s^i &= z_s^i\, \frac{\sum_{p=1}^{m_1}\sum_{q=1}^{m_2} A_{pqi}\, u_s^p v_s^q + \lambda \sum_{k=1}^{N} S_{ik}\, z_s^k + \sigma D^p_{ii}\, z_s^i}{\sum_{r=1}^{R} \langle \mathbf{u}_r, \mathbf{u}_s \rangle \langle \mathbf{v}_r, \mathbf{v}_s \rangle\, z_r^i + \lambda D_{ii}\, z_s^i + \sigma \sum_{k=1}^{N} S^p_{ik}\, z_s^k} \\ &= z_s^i\, \frac{\mathbf{u}_s^T \mathbf{A}_{;;i}\, \mathbf{v}_s + \lambda\, \mathbf{S}_{i;}\, \mathbf{z}_s + \sigma D^p_{ii}\, z_s^i}{\mathbf{Z}_{i;}\bigl((\mathbf{U}^T \mathbf{u}_s) \odot (\mathbf{V}^T \mathbf{v}_s)\bigr) + \lambda D_{ii}\, z_s^i + \sigma\, \mathbf{S}^p_{i;}\, \mathbf{z}_s} \end{aligned}$$
(52)

where \(\mathbf{Z}_{i;} \in \mathbb{R}^{1 \times R}\) denotes the ith row of \(\mathbf{Z} = [\mathbf{z}_1, \mathbf{z}_2, \ldots, \mathbf{z}_R]\); \(\mathbf{S}_{i;} \in \mathbb{R}^{1 \times N}\) and \(\mathbf{S}^p_{i;} \in \mathbb{R}^{1 \times N}\) denote the ith rows of \(\mathbf{S}\) and \(\mathbf{S}^p\), respectively; \(\odot\) is the Hadamard product; and \(\mathbf{A}_{;;i} \in \mathbb{R}^{m_1 \times m_2}\) is the matrix slice obtained by fixing the third mode of \(\mathbf{A}\) at index i and traversing the other two modes. It is defined as:

$$(\mathbf{A}_{;;i})_{pq} = A_{pqi}, \quad 1 \le p \le m_1,\quad 1 \le q \le m_2$$
(53)

Now \(\mathbf{u}_r\), \(\mathbf{v}_r\) and \(\mathbf{z}_r\) in the objective function are all solved. The slices \(\mathbf{A}_{p;;}\), \(\mathbf{A}_{;q;}\) and \(\mathbf{A}_{;;i}\) are illustrated in Fig. 11.

Fig. 11 The matrices corresponding to the different slices of a tensor
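To make the alternating scheme concrete, the z-update of Eq. (52) can be sketched in the same numpy notation, assuming \(\mathbf{D}\) and \(\mathbf{D}^p\) are the diagonal degree matrices of \(\mathbf{S}\) and \(\mathbf{S}^p\) (the usual construction for \(\mathbf{L} = \mathbf{D} - \mathbf{S}\) and \(\mathbf{L}^p = \mathbf{D}^p - \mathbf{S}^p\)); again an illustration rather than the authors' code. A full SNPNTF iteration then alternates update_U, update_V and update_Z until the objective of Eq. (36) stops decreasing.

```python
import numpy as np

def update_Z(A, U, V, Z, S, Sp, lam, sigma, delta=0.01):
    # Multiplicative update of Eq. (52); S, Sp are the N x N weight matrices
    # of the Laplacian graph and the penalty graph.
    d, dp = S.sum(axis=1), Sp.sum(axis=1)   # diagonals of D and D^p
    for s in range(Z.shape[1]):
        numer = (np.einsum('pqi,p,q->i', A, U[:, s], V[:, s])  # u_s^T A_{;;i} v_s
                 + lam * (S @ Z[:, s]) + sigma * dp * Z[:, s])
        denom = (Z @ ((U.T @ U[:, s]) * (V.T @ V[:, s]))       # Z_{i;}((U^T u_s) (.) (V^T v_s))
                 + lam * d * Z[:, s] + sigma * (Sp @ Z[:, s]) + delta)
        Z[:, s] = Z[:, s] * numer / denom
    return Z
```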


Cite this article

An, G., Liu, S. & Ruan, Q. A sparse neighborhood preserving non-negative tensor factorization algorithm for facial expression recognition. Pattern Anal Applic 20, 453–471 (2017). https://doi.org/10.1007/s10044-015-0507-x
