Deep Robust Encoder Through Locality Preserving Low-Rank Dictionary
Abstract
Deep learning has attracted increasing attention recently due to its appealing performance in various tasks. As a principal approach to deep feature learning, the deep autoencoder has been widely discussed for problems such as dimensionality reduction and model pretraining. Conventional autoencoders and their variants usually add noise (e.g., Gaussian, masking) to the training data to learn robust features, which, however, does not account for data that are already corrupted. In this paper, we propose a novel Deep Robust Encoder (DRE) through a locality preserving low-rank dictionary to extract robust and discriminative features from corrupted data, where a low-rank dictionary and a regularized deep autoencoder are jointly optimized. First, we propose a novel loss function in the output layer built on a learned low-rank clean dictionary and corresponding weights carrying locality information, which ensures that the reconstruction is noise-free. Second, discriminant graph regularizers that preserve the local geometric structure of the data are developed to guide the deep feature learning in each encoding layer. Experimental results on several benchmarks, including object and face images, verify the effectiveness of our algorithm in comparison with state-of-the-art approaches.
Keywords
Autoencoder · Low-rank dictionary · Graph regularizer
1 Introduction
Among different deep structures, the autoencoder (AE) [8] has been widely used as a robust feature extractor or a pretraining scheme in various tasks [9, 10, 11, 12, 13, 14]. The conventional AE was proposed to encourage similar or identical input-output pairs, where the reconstruction loss is minimized after decoding [8]. Follow-up work that injects various additive noises into the input layer is able to progressively purify the data, which fulfills the purpose of "denoising" against unknown corruptions in the testing data [15]. These works, as well as the most recent AE variants, e.g., multi-view AE [13] and bi-shifting AE [11], all assume the training data are clean but can be intentionally corrupted. In fact, real-world data subject to corruptions such as changing illumination, pose variations, or self-corruption do not meet this assumption. Therefore, learning deep features from real-world corrupted data, instead of data intentionally corrupted with additive noises, becomes critical for building a robust feature extractor that generalizes well to corrupted testing data. To the best of our knowledge, such an AE based deep learning scheme has not been discussed before.
Recently, low-rank matrix constraints have been proposed to learn robust features from corrupted data. Specifically, when the data lie in a single subspace, robust PCA (RPCA) [16] can recover the corrupted data well by seeking a low-rank basis, while low-rank representation (LRR) [17] is designed to recover corrupted data and rule out noises in the case of multiple subspaces. Due to these technical merits, low-rank modeling has already been successfully used in different scenarios, e.g., multi-view learning [18], transfer learning [19, 20, 21], and dictionary learning [22]. However, few works link low-rank modeling to the deep learning framework for robust feature learning.

The main contributions of this paper are threefold:

- A low-rank dictionary and a deep AE are jointly optimized on the corrupted data, which progressively denoises the already corrupted features in the hidden layers, so that a robust deep AE can be achieved for corrupted testing data.

- The newly designed loss function, based on the clean low-rank dictionary and the locality information preserved in the output layer, penalizes corruptions and distortions while ensuring that the reconstruction is noise-free.

- Graph regularizers are developed to guide feature learning in each encoding layer to preserve more of the geometric structure within the data, in either an unsupervised or a supervised fashion.
The remaining sections of this paper are organized as follows. In Sect. 2, we present a brief discussion of related work. We then propose our novel deep robust encoder in Sect. 3, together with an efficient solution. Experimental evaluations are reported in Sect. 4, followed by the conclusion in Sect. 5.
2 Related Work
In this section, we mainly discuss the recent related works and highlight the differences between their approaches and ours.
The autoencoder (AE) has attracted considerable research interest in computer vision. It was proposed as an efficient scheme for pretraining deep structures and for dimensionality reduction [5, 8]. The denoising autoencoder (DAE) builds a robust feature extractor by artificially adding random noise to the input data and then minimizing the squared loss between the reconstructed output and the original clean data [15]. Most recently, appealing AE variants have been proposed to handle different learning tasks, e.g., transfer learning [11], domain generalization [12], and multi-view learning [13]. Generally, these variants aim to adapt knowledge from one domain/view to another by tuning the input or the target data. Different from them, we consider that real-world data may already be corrupted, and we develop an active deep denoising framework to handle the existing corruptions in the training data, which then generalizes well to unseen corrupted testing data. To the best of our knowledge, little has been discussed in this regard for AEs.
Low-rank modeling has demonstrated appealing performance on robust feature extraction from noisy data. Robust PCA (RPCA) [16] was proposed to rule out noises for data lying in a single subspace, and low-rank representation (LRR) [17] was presented to handle real-world noisy data lying in multiple subspaces, identifying the global subspace structure as well as the corruptions. Besides, low-rank modeling has also been adopted in different learning tasks, e.g., generic feature extraction [18], visual domain adaptation [21], robust transfer learning [19], and dictionary learning [22]. In this paper, we also impose a low-rank constraint on dictionary learning to build a clean and compact basis. Differently, we exploit the low-rank dictionary to reconstruct the outputs of a deep AE with corrupted inputs, instead of the original data [22]. In this way, we build an active deep denoising framework that generates more robust features from corrupted data. Furthermore, the locality-preserving reconstruction helps maintain the geometric structure of the data, which has not previously been discussed together with a low-rank dictionary in deep learning.
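As a concrete illustration of the low-rank machinery these methods build on, the sketch below implements singular value thresholding, the proximal operator of the nuclear norm that forms the core step of solvers such as the ALM method [23] and the SVT algorithm [24]. The matrix and threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink each singular value of M by
    tau (floored at zero). This is the basic proximal step behind
    RPCA/LRR-style low-rank recovery (cf. [23, 24])."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# A toy matrix with singular values 5 and 1: thresholding at tau = 2
# zeroes out the weaker direction, yielding a rank-1 result.
M = np.diag([5.0, 1.0])
M_hat = svt(M, 2.0)
```

After thresholding, the singular values become 3 and 0, so `M_hat` has rank 1; in low-rank recovery this shrinkage is what suppresses noise directions while retaining the dominant subspace.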
3 The Proposed Algorithm
In this section, we first introduce our motivation and then propose our deep robust encoder through a locality preserving low-rank dictionary. Finally, we present an efficient solution to the proposed framework.
3.1 Motivation
Intentional corruptions, e.g., random noises, are added artificially, while real-world corruptions come from the data itself, e.g., varied lighting or occlusion. Most existing AEs and their variants, e.g., DAE, take advantage of different additive noises on clean data to improve the robustness of deep models. During the deep encoding/decoding process, the perturbed input data are gradually recovered. In this way, the learned deep model is able to tolerate certain corruptions simulated by the additive noises.
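For contrast, the additive corruptions used by DAE-style training can be sketched as follows; the noise fractions and array sizes are arbitrary illustrations, not settings from the paper.

```python
import numpy as np

def masking_noise(X, frac, rng):
    """DAE-style masking noise: zero out a random fraction of entries."""
    Xn = X.copy()
    Xn[rng.random(X.shape) < frac] = 0.0
    return Xn

def gaussian_noise(X, sigma, rng):
    """Additive zero-mean Gaussian noise with standard deviation sigma."""
    return X + sigma * rng.standard_normal(X.shape)

rng = np.random.default_rng(1)
X = rng.random((100, 64)) + 0.01      # a clean batch with strictly positive entries
X_mask = masking_noise(X, 0.3, rng)   # roughly 30% of entries dropped
X_gauss = gaussian_noise(X, 0.1, rng)
```

The key point of the motivation above is that such noise is injected into clean data, whereas the training data considered in this paper are assumed to be corrupted to begin with.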
3.2 Locality Preserving Low-Rank Dictionary Learning
3.3 Deep Architecture
Considering the learning objective in Eq. (5) as a basic building block, we can train a more discriminant deep model. Existing popular training schemes for deep autoencoders include the Stacked AutoEncoder (SAE) [15] and the Deep AutoEncoder [6]. However, as our learning objective/building block differs from theirs, we adopt a different training scheme for the deep structure.
3.4 Optimization
Equation (7) is difficult to solve because of the non-convexity and non-linearity of the building block formulated in Eq. (5). To this end, we develop an alternating solution that iteratively updates the encoding/decoding functions \(f_l\) \((1\le l \le 2L)\) and the dictionary D. We first present the low-rank dictionary learning step, then the regularized deep autoencoder optimization.
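The alternating scheme can be sketched with a linear stand-in: below, a plain least-squares coding step and dictionary step alternate on a simple reconstruction objective. This is a hypothetical simplification — the paper's actual updates involve the nonlinear encoder/decoder stack and a low-rank constraint on D — but it shows why block-wise alternation monotonically decreases the objective.

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.standard_normal((30, 200))  # stand-in for the decoder outputs to be reconstructed
D = rng.standard_normal((30, 10))   # dictionary with 10 atoms (random init)

def objective(Y, D, Z):
    return float(np.linalg.norm(Y - D @ Z) ** 2)

losses = []
for _ in range(10):
    # Coding step: optimal Z with D fixed (exact least squares).
    Z = np.linalg.lstsq(D, Y, rcond=None)[0]
    # Dictionary step: optimal D with Z fixed. (The paper additionally
    # imposes a low-rank constraint on D at this point.)
    D = np.linalg.lstsq(Z.T, Y.T, rcond=None)[0].T
    losses.append(objective(Y, D, Z))
```

Because each step exactly minimizes the objective over one block with the other block fixed, the recorded losses are non-increasing; the same argument underlies the behavior of the full alternating solution.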
4 Experiments
In this section, we conduct experiments to systematically evaluate our algorithm. First, we present the details of the datasets and experimental settings. Then we perform a self-evaluation of our algorithm and present comparison results with several state-of-the-art algorithms. Finally, we further examine several properties of the proposed algorithm, e.g., the impact of layer size and parameter analysis.
4.1 Datasets and Experimental Settings
Table 1. Recognition results (\(\%\)) of 4 approaches on different settings of three datasets.

Method  COIL100c  PIE1  PIE2  PIE1c  PIE2c  ALOIc
AE  74.56 ± 0.38  83.58 ± 0.11  82.79 ± 0.13  74.95 ± 0.14  73.89 ± 0.12  80.98 ± 0.98
LAE  78.32 ± 0.46  85.87 ± 0.16  85.08 ± 0.14  77.82 ± 0.12  76.14 ± 1.45  82.84 ± 1.26
L\(^2\)AEu  79.84 ± 0.64  86.98 ± 0.09  86.45 ± 0.11  79.23 ± 0.11  79.02 ± 0.12  83.42 ± 0.87
L\(^2\)AEs  82.42 ± 0.72  87.67 ± 0.10  87.54 ± 0.12  80.14 ± 0.10  79.96 ± 0.11  86.27 ± 0.75
The ALOI dataset consists of 1000 object categories captured from different viewing angles; each object has 72 equally spaced views. In these experiments, we select the first 300 objects following the setting in [26], where the images are converted to grayscale and resized to \(36\times 48\). Furthermore, 10% pixel corruption is added to test the robustness of the different methods.
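The 10% pixel-corruption protocol can be sketched as below; the exact corruption distribution is not specified in this excerpt, so replacing the selected pixels with uniform random gray values is an illustrative assumption.

```python
import numpy as np

def pixel_corrupt(img, frac, rng):
    """Replace a random fraction of pixels with random gray values in [1, 255)."""
    out = img.astype(float).copy()
    flat = out.reshape(-1)                 # view into `out`, so writes stick
    n = int(round(frac * flat.size))
    idx = rng.choice(flat.size, size=n, replace=False)
    flat[idx] = rng.random(n) * 254.0 + 1.0
    return out

rng = np.random.default_rng(0)
img = np.zeros((36, 48))                   # a 36x48 grayscale image, as in the ALOI setup
noisy = pixel_corrupt(img, 0.10, rng)      # exactly 10% of pixels corrupted
```

Sampling indices without replacement guarantees that exactly the requested fraction of pixels is altered, which keeps corruption ratios comparable across methods.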
Note that previous algorithms, e.g., DAE [15], adopted data “corrupted” with random noise as the input for training while using the “original” data for testing. However, we assume the data are “already corrupted” and aim to detect and remove the noise; thus, we adopt the “same” types of training and testing data, without intentional corruptions. Notably, to challenge all compared methods, we introduce additional noise to datasets that have already been corrupted by poor lighting or arbitrary views; such practice can be found in previous work [22, 26].
4.2 Self-Evaluation
In this section, we mainly verify whether our low-rank dictionary D and locality preserving term \(Z = [z_1,\cdots ,z_n]\) facilitate robust feature learning. Specifically, we define the deep version of Eq. (2) as LAE (autoencoder with low-rank dictionary) and the deep version of Eq. (3) as L\(^2\)AE (autoencoder with locality preserving low-rank dictionary). For L\(^2\)AE, we have two ways to learn Z: we set \(k_1 = k_2 = 5\) for all cases in the unsupervised fashion (L\(^2\)AEu), while we set \(k_1, k_2\) to the size of each class in the supervised fashion (L\(^2\)AEs). A four-layer scheme is applied to all comparisons for simplicity. We adopt the corrupted COIL100 and ALOI, and both original and corrupted images of CMU-PIE, to compare these algorithms with the baseline, the conventional AE [8]. The comparison results are shown in Table 1, where COIL100c denotes the 10% corrupted COIL with 100 objects, PIE1 and PIE2 denote the two-view cases \(\{C02, C14\}\) and \(\{C02, C27\}\) with their 10% corrupted versions PIE1c and PIE2c, respectively, and ALOIc denotes the 10% corrupted ALOI data.
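The two neighbor-selection modes for Z can be sketched as follows; the data, class layout, and Euclidean metric are illustrative assumptions, with \(k_1 = k_2 = 5\) for the unsupervised mode and k equal to the class size for the supervised mode, as described above.

```python
import numpy as np

def knn_indices(X, query, k):
    """Indices of the k nearest rows of X to `query` under Euclidean distance."""
    d = np.linalg.norm(X - query, axis=1)
    return np.argsort(d)[:k]

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 8))       # 60 samples, 8-dim features
labels = np.repeat(np.arange(3), 20)   # 3 classes with 20 samples each

x = X[0]
nn_unsup = knn_indices(X, x, k=5)      # unsupervised: k1 = k2 = 5 neighbors

# Supervised: restrict candidates to the query's own class and use the
# class size (20 here) as the neighborhood size.
same = np.flatnonzero(labels == labels[0])
nn_sup = same[knn_indices(X[same], x, k=len(same))]
```

The supervised mode trades purely geometric locality for label-consistent neighborhoods, which is why it is expected to yield more discriminative reconstruction weights.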
From the results, we observe that LAE outperforms the conventional AE, which means that jointly learning the low-rank dictionary boosts the deep feature learning of the autoencoder. Furthermore, our robust AEs with the locality preserving low-rank dictionary achieve better performance than LAE and AE in both unsupervised and supervised settings; that is, the locality preserving property generates more discriminative features for classification.
4.3 Comparison Experiments
Table 2. Recognition results (\(\%\)) of 9 algorithms on COIL100 with different evaluation sizes, from 20 to 100 objects, where C1 to C5 denote 20 objects to 100 objects, respectively.
Original images  

PCA  LDA  RPCA+LDA  DLRD  LatLRR  SRRS  DAE  OursI  OursII  
C1  86.42 ± 1.11  81.83 ± 2.03  83.26 ± 1.52  89.58 ± 1.04  88.98 ± 0.85  87.81 ± 1.43  90.65 ± 1.34  
C2  83.75 ± 1.12  77.08 ± 1.36  78.39 ± 1.15  85.18 ± 1.10  88.45 ± 0.64  84.77 ± 1.25  90.34 ± 1.12  
C3  81.01 ± 0.92  66.96 ± 1.52  68.93 ± 0.86  82.60 ± 1.06  86.36 ± 0.52  80.85 ± 0.65  87.69 ± 0.82  
C4  80.53 ± 0.78  59.34 ± 1.22  60.73 ± 0.68  81.10 ± 0.58  84.67 ± 0.79  79.75 ± 0.61  85.15 ± 0.60  
C5  82.75 ± 0.59  52.29 ± 0.30  56.44 ± 0.73  79.92 ± 0.93  82.64 ± 0.60  78.99 ± 0.48  84.21 ± 0.69 
Corrupted images with 10 % random noise  

PCA  LDA  RPCA+LDA  DLRD  LatLRR  SRRS  DAE  OursI  OursII  
C1  71.43 ± 1.12  47.77 ± 3.06  49.35 ± 1.55  82.96 ± 1.81  81.38 ± 1.25  86.45 ± 1.12  82.37 ± 1.37  
C2  70.22 ± 1.56  45.89 ± 1.12  53.26 ± 1.84  60.46 ± 0.79  81.93 ± 0.92  82.03 ± 1.31  81.13 ± 0.83  
C3  69.80 ± 0.65  36.42 ± 1.12  44.18 ± 2.65  49.88 ± 0.49  80.97 ± 0.45  82.05 ± 0.87  79.61 ± 1.02  
C4  67.84 ± 0.83  27.13 ± 0.95  29.92 ± 0.96  41.52 ± 0.71  77.15 ± 0.72  79.83 ± 0.62  76.23 ± 0.59  
C5  65.68 ± 0.76  16.79 ± 0.34  23.55 ± 0.46  73.82 ± 0.77  73.47 ± 0.62  74.95 ± 0.65  72.15 ± 0.60 
Table 3. Recognition results (\(\%\)) on the CMU-PIE face database, where P1: {C02, C14}, P2: {C02, C27}, P3: {C14, C27}, P4: {C05, C07, C29}, P5: {C05, C14, C29, C34}, P6: {C02, C05, C14, C29, C31}.
Original images  

PCA  LDA  RPCA+LDA  LatLRR  SRRS  LRCS  DAE  OursI  OursII  
P1  69.03 ± 0.08  70.46 ± 0.05  74.39 ± 0.08  77.92 ± 0.03  78.27 ± 0.04  87.78 ± 0.02  85.65 ± 0.12  
P2  69.21 ± 0.08  71.32 ± 0.02  75.55 ± 0.12  76.24 ± 0.12  78.74 ± 0.23  86.67 ± 0.01  84.32 ± 0.09  
P3  68.52 ± 0.12  63.51 ± 0.75  75.29 ± 0.09  75.29 ± 0.07  77.45 ± 0.02  87.38 ± 0.19  84.53 ± 0.04  
P4  52.65 ± 0.04  56.53 ± 0.02  61.17 ± 0.12  69.74 ± 0.05  71.44 ± 0.03  71.87 ± 0.09  74.08 ± 0.07  
P5  34.94 ± 0.08  24.07 ± 0.25  38.66 ± 0.08  42.54 ± 0.12  38.86 ± 0.02  42.32 ± 0.07  44.42 ± 0.10  
P6  29.09 ± 0.01  7.06 ± 0.01  31.94 ± 0.12  35.33 ± 0.04  30.16 ± 0.02  36.17 ± 0.01  33.50 ± 0.05 
Corrupted images with 10 % random noise  

PCA  LDA  RPCA+LDA  LatLRR  SRRS  LRCS  DAE  OursI  OursII  
P1  64.87 ± 0.32  26.71 ± 0.20  73.07 ± 0.11  73.10 ± 0.07  72.27 ± 0.05  78.98 ± 0.03  77.14 ± 0.11  
P2  66.04 ± 0.08  23.19 ± 0.35  74.28 ± 0.12  73.24 ± 0.32  72.74 ± 0.18  78.67 ± 0.05  76.98 ± 0.06  
P3  65.21 ± 0.04  20.34 ± 0.75  73.92 ± 0.12  73.85 ± 0.12  71.45 ± 0.08  78.38 ± 0.26  77.32 ± 0.09  
P4  50.16 ± 0.04  46.72 ± 0.02  60.18 ± 0.14  58.94 ± 0.09  54.32 ± 0.03  65.84 ± 0.04  70.64 ± 0.08  
P5  31.74 ± 0.08  6.67 ± 0.25  37.65 ± 0.09  39.26 ± 0.12  32.34 ± 0.02  39.48 ± 0.03  40.32 ± 0.09  
P6  27.21 ± 0.01  4.06 ± 0.01  31.34 ± 0.06  32.07 ± 0.03  29.03 ± 0.02  32.57 ± 0.01  33.12 ± 0.09 
On the COIL dataset, we observe that low-rank modeling based methods achieve very good results compared with DAE, even though the latter adds additive noises to train robust deep models. This demonstrates the robustness of low-rank modeling against noisy data. On the CMU-PIE dataset, DAE achieves performance very similar to the low-rank modeling based methods, in both supervised and unsupervised fashions. Similar results can be found on the ALOI dataset. On CMU-PIE, our algorithm cannot significantly improve the performance. One reason is that the facial appearances under different views in CMU-PIE are very different; together with the additional illumination variations, this poses a very challenging feature learning problem on real-world data. However, our algorithm still achieves promising performance, even better than a most recent multi-view learning method, LRCS. This further verifies the robustness of our algorithm against real-world noises. Generally, our supervised model outperforms the unsupervised one in almost all cases, which demonstrates the importance of discriminative information in classification tasks.
4.4 Property Evaluation
In this section, we further evaluate several properties of the proposed algorithm, e.g., robustness to noise, parameter influence, and layer size impact, to achieve a better understanding of the proposed model.
First of all, we evaluate the impact of different corruption ratios on the different algorithms. We evaluate 0%, 10%, 20%, 30%, 40%, and 50% corruption with 20 objects on the COIL dataset and report the results in Fig. 4(b), where our algorithm in both modes consistently outperforms the other competitors. This demonstrates that our proposed algorithm builds a more robust feature extractor, especially for data with large corruption, and could therefore work effectively in real-world applications with various noises.
Second, we conduct a parameter analysis for our supervised model (OursII). Specifically, we evaluate the balance parameters \(\lambda \) and \(\alpha \) for the low-rank dictionary and the graph terms, respectively. For better illustration, we jointly evaluate the two parameters on the corrupted COIL dataset with all 100 objects. Parameter influence results are shown in Fig. 4(c). From the results, we notice that larger values of \(\alpha \) perform better, especially when \(\lambda \) is small. Besides, we see that a small \(\lambda \) around \(10^{-2}\) performs better; that is, the graph regularizer is more critical to our algorithm than the low-rank constraint on the dictionary. Without loss of generality, we set \(\alpha = 10^2\) and \(\lambda =10^{-2}\) throughout our experiments.
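Such a joint evaluation can be organized as a simple grid search over powers of ten. In the sketch below, `validation_accuracy` is a hypothetical stand-in for training the model with a given \((\lambda, \alpha)\) pair and scoring it on held-out data — here it is just a toy surface with an arbitrarily chosen peak, not a result from the paper.

```python
import numpy as np
from itertools import product

lams   = [10.0 ** p for p in range(-4, 5)]   # candidate lambda values
alphas = [10.0 ** p for p in range(-4, 5)]   # candidate alpha values

def validation_accuracy(lam, alpha):
    """Hypothetical stand-in: train with (lam, alpha) and return held-out
    accuracy. This toy surface peaks at lam = 1e-2, alpha = 1e2."""
    return -((np.log10(lam) + 2.0) ** 2 + (np.log10(alpha) - 2.0) ** 2)

# Exhaustively score every (lambda, alpha) pair and keep the best.
best_lam, best_alpha = max(
    product(lams, alphas), key=lambda p: validation_accuracy(*p)
)
```

Replacing the toy scorer with the real training/evaluation pipeline yields exactly the kind of joint sensitivity map shown in Fig. 4(c).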
Finally, we evaluate the impact of layer size for OursII on corrupted COIL100 with different corruption ratios (10%, 20%, 30%). From Fig. 4(d), we notice that our algorithm generally achieves better performance as the layer size goes up; that is, discriminative information is gradually recovered by our deep encoding procedure, with features refined from coarse to fine in a multi-layer fashion. However, we also observe that a much deeper structure hurts the recognition performance. Therefore, in the experiments, we use a four-layer structure to generate the evaluation features.
5 Conclusion
In this paper, we developed a novel Deep Robust Encoder framework guided by a locality preserving low-rank dictionary learning scheme. Specifically, we designed a low-rank dictionary to constrain the output of the deep autoencoder with corrupted input; in this way, the deep network generates more robust features by detecting noise in the corrupted data. Moreover, coefficient vectors \(z_i\) were maintained through the network so that each output sample is reconstructed by the most similar dictionary samples with different weights. Furthermore, graph regularizers were developed to couple each layer's encoding and preserve more geometric structure. In the experiments, our method produced more effective features for classification, and results on several benchmarks demonstrated its superiority over competing methods.
Acknowledgment
This research is supported in part by the NSF CNS award 1314484, ONR award N000141211028, ONR Young Investigator Award N000141410484, and U.S. Army Research Office Young Investigator Award W911NF1410218.
References
1. Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., Darrell, T.: DeCAF: a deep convolutional activation feature for generic visual recognition. In: International Conference on Machine Learning, pp. 647–655 (2014)
2. Szegedy, C., Toshev, A., Erhan, D.: Deep neural networks for object detection. In: Neural Information Processing Systems, pp. 2553–2561 (2013)
3. Taigman, Y., Yang, M., Ranzato, M., Wolf, L.: DeepFace: closing the gap to human-level performance in face verification. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1701–1708 (2014)
4. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Neural Information Processing Systems, pp. 1097–1105 (2012)
5. Bengio, Y.: Learning deep architectures for AI. Found. Trends Mach. Learn. 2(1), 1–127 (2009)
6. Le, Q.V., Ngiam, J., Coates, A., Lahiri, A., Prochnow, B., Ng, A.Y.: On optimization methods for deep learning. In: International Conference on Machine Learning, pp. 265–272 (2011)
7. Lee, C.Y., Xie, S., Gallagher, P., Zhang, Z., Tu, Z.: Deeply-supervised nets. In: International Conference on Artificial Intelligence and Statistics, pp. 562–570 (2015)
8. Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
9. Hinton, G.E., Krizhevsky, A., Wang, S.D.: Transforming auto-encoders. In: Honkela, T., Duch, W., Girolami, M., Kaski, S. (eds.) ICANN 2011. LNCS, vol. 6791, pp. 44–51. Springer, Heidelberg (2011). doi:10.1007/978-3-642-21735-7_6
10. Droniou, A., Sigaud, O.: Gated autoencoders with tied input weights. In: International Conference on Machine Learning, pp. 154–162 (2013)
11. Kan, M., Shan, S., Chen, X.: Bi-shifting auto-encoder for unsupervised domain adaptation. In: IEEE International Conference on Computer Vision, pp. 3846–3854 (2015)
12. Ghifary, M., Bastiaan Kleijn, W., Zhang, M., Balduzzi, D.: Domain generalization for object recognition with multi-task autoencoders. In: IEEE International Conference on Computer Vision, pp. 2551–2559 (2015)
13. Wang, W., Arora, R., Livescu, K., Bilmes, J.: On deep multi-view representation learning. In: International Conference on Machine Learning, pp. 1083–1092 (2015)
14. Xia, C., Qi, F., Shi, G.: Bottom-up visual saliency estimation with deep autoencoder-based sparse reconstruction. IEEE Trans. Neural Netw. Learn. Syst. 27(6), 1227–1240 (2016)
15. Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., Manzagol, P.A.: Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 11, 3371–3408 (2010)
16. Wright, J., Ganesh, A., Rao, S., Peng, Y., Ma, Y.: Robust principal component analysis: exact recovery of corrupted low-rank matrices via convex optimization. In: Neural Information Processing Systems, pp. 2080–2088 (2009)
17. Liu, G., Lin, Z., Yan, S., Sun, J., Yu, Y., Ma, Y.: Robust recovery of subspace structures by low-rank representation. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 171–184 (2013)
18. Ding, Z., Fu, Y.: Low-rank common subspace for multi-view learning. In: IEEE International Conference on Data Mining, pp. 110–119 (2014)
19. Shao, M., Kit, D., Fu, Y.: Generalized transfer subspace learning through low-rank constraint. Int. J. Comput. Vis. 109(1–2), 74–93 (2014)
20. Ding, Z., Shao, M., Fu, Y.: Deep low-rank coding for transfer learning. In: Twenty-Fourth International Joint Conference on Artificial Intelligence, pp. 3453–3459 (2015)
21. Jhuo, I.H., Liu, D., Lee, D., Chang, S.F.: Robust visual domain adaptation with low-rank reconstruction. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2168–2175 (2012)
22. Ma, L., Wang, C., Xiao, B., Zhou, W.: Sparse representation for face recognition based on discriminative low-rank dictionary learning. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2586–2593 (2012)
23. Lin, Z., Chen, M., Ma, Y.: The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055 (2010)
24. Cai, J.F., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010)
25. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45(1–3), 503–528 (1989)
26. Li, S., Fu, Y.: Learning robust and discriminative subspace with low-rank constraints. IEEE Trans. Neural Netw. Learn. Syst. PP(99), 1–13 (2015)
27. Turk, M., Pentland, A.: Eigenfaces for recognition. J. Cogn. Neurosci. 3(1), 71–86 (1991)
28. Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J.: Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 711–720 (1997)
29. Liu, G., Yan, S.: Latent low-rank representation for subspace segmentation and feature extraction. In: IEEE International Conference on Computer Vision, pp. 1615–1622 (2011)