A novel grouped sparse representation for face recognition
Abstract
Grouped sparse representation classification methods (GSRCMs) have attracted much attention from scholars, especially in face recognition. However, previous work on GSRCMs only fuses the scores from different groups to classify the test sample and does not consider the relationships between the groups. Moreover, in real-world applications, many face recognition methods cannot obtain satisfactory recognition accuracies because of variations in pose, illumination and facial expression. To overcome these bottlenecks, in this paper we propose a novel grouped fusion-based method for face recognition. The proposed method uses the axis-symmetrical property of the face to design a framework, which is performed on the original training set to generate a kind of virtual samples. The virtual samples are able to reflect the possible changes of face images. Meanwhile, to consider the relationship between different groups and strengthen the representation capability for the test sample, the proposed method exploits a novel weighted fusion approach to classify the test sample. Experimental results on five face databases demonstrate that our method is reasonable and obtains higher recognition rates than 11 other state-of-the-art methods.
Keywords
Grouped sparse representation classification · Axis-symmetrical property · Framework · Virtual sample · Weighted fusion approach

1 Introduction
Among all biometric techniques, face recognition is the most attractive. A lot of face recognition methods have been applied to identity authentication and security systems [3, 4, 11, 21, 23]. However, face recognition still faces many challenges, such as varying lighting, facial expressions, poses and environments [10, 13, 14, 33, 35, 42, 54, 57, 65, 68]. To overcome these challenges, many representation-based classification methods (RBCMs) [15, 29, 31, 37, 52, 53, 63, 64, 68] have been proposed, such as sparse representation classification (SRC) [52], collaborative representation classification (CRC) [63], two-phase test sample representation (TPTSR) [53], linear regression classification (LRC) [37], the feature space representation method [61], an improvement to the nearest neighbor classification (INNC) [55], etc. SRC tries to represent the test sample by an optimal linear combination of the training samples; the test sample is then assigned to the class with the minimum deviation. Since SRC uses l_1 regularization to obtain the coefficient vector, it is viewed as an l_1-norm-based representation method. Scholars have also proposed l_2-norm-based representation methods such as CRC, LRC and TPTSR. Like SRC, CRC uses a combination of all training samples to represent the test sample, but it computes the coefficient vector by l_2 regularization. The recently proposed LRC is closely related to the nearest intra-class space (NICS) method [32]. The major difference between LRC and CRC is that LRC tries to use an optimal linear combination of the training samples of each class to represent the test sample; in other words, it establishes a linear system for each class. TPTSR first eliminates the classes that are far from the test sample; then, the training samples of the remaining classes are combined to represent the test sample.
Besides the above-mentioned RBCMs, another way to improve the recognition rate is to simultaneously use the original training samples and corresponding virtual samples [16, 34, 38, 40, 44, 46, 47] to recognize the test sample. In real-world applications of face recognition, because of the limited training samples, face recognition methods often suffer from the challenges of varying poses, illuminations and facial expressions. To solve these problems, many methods have been proposed. For example, Xu et al. use the symmetry of the face to generate mirror samples [57]; the mirror samples are then integrated into the original training set to recognize the test samples. Another method [58] by Xu et al. generates a kind of virtual samples by the combination of multiple descriptions or representations; the original training samples and virtual samples are then integrated to recognize the test sample. The formula of the multiple representations is defined as J_{ij} = I_{ij} ⋅ (m − I_{ij}), where m = 255, and I_{ij} and J_{ij} stand for the intensity of the pixel at the ith row and jth column of the original training sample and of the virtual training sample, respectively. Experimental results in [57, 58] demonstrate that the virtual samples are able to reflect the possible changes of poses, illuminations and facial expressions of face images. Although virtual samples are able to enlarge the original training set, this might adversely affect the efficiency of the appointed algorithm. So how to improve the recognition rate with limited training samples is still a hot research topic.
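Both kinds of virtual samples from [57, 58] are simple pixel-wise operations. The sketch below is a minimal NumPy version; the function names are ours, not from the cited papers:

```python
import numpy as np

def mirror_sample(image):
    # Mirror virtual sample of [57]: flip the face image about its
    # vertical axis, exploiting the axis-symmetry of the face.
    return np.fliplr(image)

def multi_representation_sample(image, m=255):
    # Virtual sample of [58]: J_ij = I_ij * (m - I_ij) for an
    # 8-bit intensity image I (so m = 255).
    image = image.astype(np.float64)
    return image * (m - image)

# Toy 2x3 "face" with 8-bit intensities
I = np.array([[10, 200, 30],
              [40, 128, 60]], dtype=np.uint8)
J_mirror = mirror_sample(I)              # columns reversed
J_multi = multi_representation_sample(I) # pixel-wise I * (255 - I)
```

With a real face image, `mirror_sample` produces the "symmetrical face" and `multi_representation_sample` produces an alternative intensity representation; both are appended to the training set.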
To improve the recognition rate with limited training samples, scholars have proposed fusion theory. Fusion is mainly performed at three levels, i.e., the score level, the decision level and the feature level. Although decision-level fusion is very easy to implement, the multi-source information cannot be fully exploited. Feature-level fusion is able to exploit the most information from the original data, but it needs to overcome the inconsistency of different data sets. For score-level fusion [8, 20, 68], there are three kinds: transformation-based score fusion [30], classifier-based score fusion [26] and density-based score fusion. Classifier-based score fusion directly combines the scores from different data sources into a feature vector; the test sample is then classified in terms of this feature vector. Transformation-based score fusion first transforms the scores from different data sources into a common domain, and then classifies the test sample by integrating the normalized scores. Density-based score fusion is able to obtain a higher recognition rate than the other two if the score densities can be accurately estimated, but in real-world applications, because of the complexity of this estimation, density-based score fusion is hard to use. Previous work [11, 12, 18, 49] shows that if score fusion methods are integrated into RBCMs, especially GSRCMs [48, 56, 62], the test sample is more likely to be assigned to the correct class.
However, previously proposed GSRCMs only consider the residuals from different kinds of groups, and do not consider the relationships between groups. Moreover, in practical applications of face recognition, only a very small number of training samples are available. This motivates us to propose a novel GSRCM to resolve the above problems. To classify the test sample, the proposed method first selects the training samples of its nearest classes to form the first group. Next, a framework is performed on the first group to generate a kind of virtual samples; all virtual samples form the second group. Then, SRC is performed on the two groups to generate two residuals for each class. Finally, the two residuals of a class and their distance are fused into the ultimate residual. The distance is defined as the difference between the reconstructed samples of the class from the first group and the second group.
The proposed method has the following main contributions. Firstly, it exploits the idea of representing the test sample by its nearest neighbor classes, which reduces the side effect of weakly related training samples. Secondly, by designing a framework for generating a kind of virtual samples, the challenges of various poses, illuminations and facial expressions can be partly overcome. Thirdly, for the classification decision, the proposed method not only takes into account the two residuals of a class, but also considers the distance of the class, since they all contain discriminant information. Fourthly, a novel weighted fusion approach is proposed to fuse the two residuals and the distance of a class. The fusion approach takes the test sample into account and generates adaptive weights for the different residuals, which helps to better classify the test sample.
The rest of this paper is organized as follows. Section 2 gives a brief review of SRC and LRC. Section 3 presents the details of the proposed method. Section 4 analyzes the rationale of the proposed method. Section 5 reports experiments on five databases. Section 6 concludes the paper.
2 Related works
SRC and LRC are two conventional representation-based methods, and many representation-based classification methods are based on them. Assume that there are c classes and each class has n training samples. The test sample is denoted by z. Moreover, each sample matrix is converted into a column vector.
2.1 SRC
Suppose that the training samples of the ith class are denoted by x_{i1}, x_{i2}, …, x_{in}, where i = 1, 2, …, c. Let us combine all training samples to form a matrix X = [x_{11}, …, x_{1n}, …, x_{c1}, …, x_{cn}]. According to SRC, the test sample is represented as
z = Xa,   (1)
and the coefficient vector a is obtained by solving the l_1-regularized problem
\( \hat{a}=\arg \min_a {\left\Vert z-Xa\right\Vert}_2^2+\xi {\left\Vert a\right\Vert}_1, \)   (2)
where ξ is a very small positive constant. The residual of the ith class is \( {r}_i={\left\Vert z-{X}_i{\hat{a}}_i\right\Vert}_2 \), where X_i and \( {\hat{a}}_i \) denote the training samples and coefficients of the ith class. If the minimum residual is from the ith class, the test sample is classified into the ith class.
The major difference between CRC and SRC is that CRC solves Eq. (2) with l_2-norm regularization.
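To make the SRC pipeline concrete, here is a small self-contained sketch. It uses a plain ISTA loop as the l_1 solver; the original SRC work uses dedicated l_1 solvers, so ISTA is just a simple stand-in, and the function names are ours:

```python
import numpy as np

def ista_l1(X, z, xi=0.01, n_iter=500):
    # Solve min_a ||z - X a||_2^2 + xi * ||a||_1 by iterative
    # soft-thresholding (ISTA), a simple stand-in for SRC's l1 solvers.
    L = 2 * np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the gradient
    a = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = 2 * X.T @ (X @ a - z)       # gradient of the smooth term
        u = a - grad / L
        a = np.sign(u) * np.maximum(np.abs(u) - xi / L, 0.0)  # soft threshold
    return a

def src_classify(X, labels, z, xi=0.01):
    # Assign z to the class whose coefficients best reconstruct it.
    a = ista_l1(X, z, xi)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        a_c = np.where(mask, a, 0.0)       # keep only class-c coefficients
        residuals[c] = np.linalg.norm(z - X @ a_c)
    return min(residuals, key=residuals.get)

# Two classes in R^2: class 0 near e1, class 1 near e2
X = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9]])
labels = [0, 0, 1, 1]
pred = src_classify(X, labels, np.array([1.0, 0.05]))
```

On this toy problem the test vector lies close to the span of class 0's columns, so its class-0 residual is much smaller than its class-1 residual.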
2.2 LRC
LRC represents the test sample by the training samples of each class separately: for the ith class, the coefficient vector is estimated by least squares, \( {\hat{a}}_i={\left({X}_i^T{X}_i\right)}^{-1}{X}_i^Tz \), and the residual is \( {r}_i={\left\Vert z-{X}_i{\hat{a}}_i\right\Vert}_2 \). The test sample is assigned to the ith class if the minimum residual is from the ith class.
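A minimal sketch of LRC's per-class least-squares classification, following the standard LRC formulation of [37] (the interface is ours):

```python
import numpy as np

def lrc_classify(class_matrices, z):
    # LRC: represent z with each class's training samples separately
    # (one linear system per class) and pick the smallest residual.
    residuals = {}
    for label, Xc in class_matrices.items():
        # Least-squares coefficients: a = (Xc^T Xc)^{-1} Xc^T z
        a, *_ = np.linalg.lstsq(Xc, z, rcond=None)
        residuals[label] = np.linalg.norm(z - Xc @ a)
    return min(residuals, key=residuals.get)

# Toy data: class 0 spans the first axis, class 1 the second
classes = {0: np.array([[1.0], [0.0]]),
           1: np.array([[0.0], [1.0]])}
pred = lrc_classify(classes, np.array([0.9, 0.1]))
```

Here the test vector projects almost entirely onto class 0's subspace, so class 0 yields the smaller residual.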
3 The proposed method
3.1 The description of framework
3.2 The description of novel weighted fusion approach
 Step 1:
\( {r}_i^1 \) and \( {r}_i^2 \) are normalized to the range of 0 to 1. In this case, \( {r}_i^1 \) and \( {r}_i^2 \) can be rewritten as
 Step 2:
\( {r}_1^1,{r}_2^1,\dots, {r}_c^1 \) are sorted in ascending order and the sorted result is recorded as \( {r_1^1}^{\prime}\le {r_2^1}^{\prime}\le \cdots \le {r_c^1}^{\prime } \). Similarly, \( {r}_1^2,{r}_2^2,\dots, {r}_c^2 \) are sorted in ascending order and the sorted result is recorded as \( {r_1^2}^{\prime}\le {r_2^2}^{\prime}\le \cdots \le {r_c^2}^{\prime } \). Let \( w=\left({r_2^1}^{\prime }-{r_1^1}^{\prime}\right)+\left({r_2^2}^{\prime }-{r_1^2}^{\prime}\right) \), \( {w}_1=\frac{{r_2^1}^{\prime }-{r_1^1}^{\prime }}{w} \), \( {w}_2=\frac{{r_2^2}^{\prime }-{r_1^2}^{\prime }}{w} \). So, the ultimate residual of the ith class can be written as
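The two steps above can be sketched as follows (a minimal version; the variable names are ours):

```python
import numpy as np

def normalize(r):
    # Step 1: min-max normalize a group's residuals to [0, 1].
    r = np.asarray(r, dtype=float)
    return (r - r.min()) / (r.max() - r.min())

def fusion_weights(r1, r2):
    # Step 2: sort each group's residuals ascending; the gap between
    # the two smallest residuals measures how decisive that group is,
    # and the gaps are normalized into weights w1 and w2.
    s1, s2 = np.sort(r1), np.sort(r2)
    w = (s1[1] - s1[0]) + (s2[1] - s2[0])
    return (s1[1] - s1[0]) / w, (s2[1] - s2[0]) / w

# Toy residuals of three classes from the two groups
w1, w2 = fusion_weights(normalize([0.2, 0.6, 0.9]),
                        normalize([0.1, 0.3, 0.8]))
```

A group whose best class clearly beats its second-best class receives a larger weight, so the more confident group dominates the fused residual.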
3.3 The description of the grouped sparse classification

First step: We first use a linear combination of the training samples of the original training set to represent the test sample. According to SRC, each class of the training set obtains a residual. Next, all training samples from the K classes with the K smallest residuals are selected to form the first group, where K < c.
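The first step amounts to ranking the classes by their SRC residuals and keeping the best K; a minimal sketch (names ours):

```python
def select_k_classes(residuals_by_class, K):
    # Keep the K classes whose SRC residuals for the test sample
    # are smallest (K < total number of classes c).
    ranked = sorted(residuals_by_class, key=residuals_by_class.get)
    return ranked[:K]

# Toy residuals for four classes; classes 1 and 2 are nearest
nearest = select_k_classes({0: 0.5, 1: 0.1, 2: 0.3, 3: 0.9}, K=2)
```

The training samples of the returned classes then form the first group.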

Second step: We perform the framework on the first group to generate the virtual samples, which form the second group. Let us respectively combine all training samples of the first and second groups to form the matrices G_{1} and G_{2}. Then SRC is performed on these two groups. In this case, we have

Third step: Let a_{e}, …, a_{f} and \( {a}_e^{\prime },\dots, {a}_f^{\prime } \) be the coefficients of the training samples of the ith class in the two groups. Next, the reconstructed samples of the ith class in the two groups are defined as \( {\sum}_{t=e}^f{a}_t{x}_{it} \) and \( {\sum}_{t=e}^f{a}_t^{\prime }{x}_{it}^v \), where \( {x}_{it}^v \) is the virtual sample of x_{it}. Then, the distance of the ith class can be written as \( di{s}_i={\left\Vert {\sum}_{t=e}^f{a}_t{x}_{it}-{\sum}_{t=e}^f{a}_t^{\prime }{x}_{it}^v\right\Vert}_2 \).
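A sketch of the third step, computing the distance between a class's two reconstructions, assuming the coefficient slices a_e, …, a_f for that class have already been extracted from the two SRC solutions (interface ours):

```python
import numpy as np

def class_distance(Xc, Xc_virtual, a, a_virtual):
    # Reconstruct the test sample from the ith class in each group:
    #   sum_t a_t * x_it   and   sum_t a'_t * x^v_it,
    # then take the Euclidean distance between the reconstructions.
    rec1 = Xc @ a
    rec2 = Xc_virtual @ a_virtual
    return np.linalg.norm(rec1 - rec2)

# Toy class with two training samples in R^2
Xc = np.eye(2)
Xv = np.eye(2)
dis = class_distance(Xc, Xv, np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

A small distance means the class reconstructs the test sample consistently in both groups, which the fusion step rewards.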

Fourth step: Let us perform the proposed weighted fusion approach on the two groups. According to subsection 3.2, the ultimate residual of the ith class among the K selected classes can be written as
4 Analysis of the proposed method
4.1 Intuitive rationalities of the proposed method
It reflects the difference between the reconstructed samples of the class in the first and second groups. In addition, \( \Delta {z}_i^1 \), \( \Delta {z}_i^2 \) and dis_{i} form a triangle in the m-dimensional space (see subsection 3.3). \( \Delta {z}_i^1 \) and \( \Delta {z}_i^2 \) capture the discriminant information between the test sample and the ith class, so the distance dis_{i} simultaneously contains the discriminant information of the test sample in both groups. If the reconstructed sample of a class in the first group is similar to the one in the second group, the corresponding distance will be very small. In this case, fusing the two kinds of residuals and the distance of the class may yield a smaller ultimate residual, which helps improve the recognition accuracy. Finally, a novel weighted fusion approach is proposed to fuse the two residuals and the distance of a class. The weighted fusion approach uses a simple way to generate the weights for the different residuals of a class automatically. In other words, it takes each test sample into account and determines suitable weights for that sample, which allows the dissimilarity between the test sample and a class to be considered flexibly.
4.2 More analysis of the proposed method
4.2.1 Insight into the advantage of the framework of our method
The Euclidean distances between the test sample and the reconstructed virtual samples of our method and of the method [54]

No. of the subject                                1       2       3       4       5
Reconstructed virtual samples of the method [54]  0.2172  0.6450  0.2707  0.2854  0.8696
Reconstructed virtual samples of our method       0.1763  0.4495  0.2390  0.2718  0.6520
4.2.2 The advantage of the weighted fusion approach of our method
4.2.3 The comparison between our weighted fusion approach and the method [67]
4.2.4 Insight into the advantage of the distance of our method
5 Experimental result and discussion
In this section, the ORL, Georgia Tech, FERET, CMU-PIE and Libor face databases were used to conduct the experiments. Meanwhile, SRC, CRC, LRC, the coarse-to-fine K nearest neighbor classification (CFKNNC) [56], the improvement to the nearest neighbor classification (INNC), Homotopy [7], the primal augmented Lagrangian method (PLAM) [60], the method [65], the discriminative sparse representation method (DSRM) [59], the block-diagonal representation (BDLRR) [66] and the method [27] were compared with our method. Moreover, we set the parameter μ of our method to 0.01, and the number of iterations of Homotopy and PLAM to 10. The parameter K varies across experiments because the number of classes differs between datasets. During the experiments, we found that the improvement in recognition accuracy of our method is obvious when K approximately equals one-half or one-third of the number of classes. Finally, for the computation of accuracy, assuming that the number of correctly classified test samples is p_{r}, the formula is defined as \( accuracy=\frac{p_r}{C\times n}\times 100\% \).
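The accuracy formula above, written as code (with C × n being the total number of test samples):

```python
def accuracy(p_r, C, n):
    # accuracy = p_r / (C * n) * 100%, with p_r the number of
    # correctly classified test samples and C * n the total
    # number of test samples (C classes, n samples per class).
    return p_r / (C * n) * 100.0

# e.g. 90 correct answers over 40 classes with 3 test samples each
acc = accuracy(90, 40, 3)
```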
5.1 Experiments on ORL face database
The accuracies (%) on the ORL face database
No. of the training sample  1  2  3  4 

SRC  67.50  85.00  85.17  90.00 
LRC  67.50  79.06  81.79  85.00 
CRC  68.06  83.44  86.07  89.17 
INNC  68.06  78.75  78.21  82.50 
CFKNNC  69.47  82.19  80.00  82.52 
Homotopy  64.17  81.88  87.53  89.75 
PLAM  64.72  79.38  84.29  86.25 
The method [54]  66.11  75.62  78.57  77.50 
DSRM  73.06  85.31  88.93  92.33 
BDLRR  65.56  80.13  87.86  90.42 
The method [27]  65.83  81.25  83.21  84.17 
Our method  75.00  88.44  89.64  92.50 
5.2 Experiments on Georgia Tech face database
The accuracies (%) on Georgia Tech face database
No. of the training sample  1  2  3  4 

SRC  37.86  46.46  50.50  57.27 
LRC  34.00  47.80  53.83  57.47 
CRC  37.14  45.45  50.05  53.27 
INNC  33.14  39.85  40.83  43.82 
CFKNNC  38.71  47.69  53.67  54.73 
Homotopy  34.00  47.80  53.83  57.47 
PLAM  38.34  49.54  55.33  59.91 
The method [54]  32.29  40.92  41.83  43.27 
DSRM  37.14  46.15  49.00  55.09 
BDLRR  35.43  48.46  55.33  60.09 
The method [27]  33.41  47.54  48.50  53.45 
Our method  39.00  50.77  56.67  60.55 
5.3 Experiments on FERET face database
The accuracies (%) on the FERET face database
No. of the training sample  1  2  3  4 

SRC  50.25  64.80  60.00  53.50 
LRC  44.92  64.20  59.62  76.50 
CRC  44.33  58.40  44.37  55.33 
INNC  44.33  58.30  50.50  54.00 
CFKNNC  47.83  63.30  54.88  57.32 
Homotopy  25.75  33.75  35.36  41.67 
PLAM  22.17  30.90  37.58  45.85 
The method [54]  31.25  61.60  55.13  58.17 
DSRM  36.38  52.60  48.38  61.33 
BDLRR  25.58  49.80  62.00  72.50 
The method [27]  40.08  54.30  52.88  56.00 
Our method  51.25  67.20  60.75  77.37 
5.4 Experiments on the CMUPIE face database
The accuracies (%) on the CMU-PIE face database
No. of the training sample  1  2  3  4 

SRC  17.92  50.22  52.14  79.51 
LRC  16.85  48.94  52.46  66.70 
CRC  18.17  50.47  53.07  80.03 
INNC  18.17  46.45  49.11  79.38 
CFKNNC  19.38  51.07  53.00  80.92 
Homotopy  34.24  49.58  51.70  79.93 
PLAM  23.16  50.13  53.23  77.24 
The method [54]  20.62  51.77  55.06  82.52 
DSRM  35.35  51.25  53.30  81.36 
BDLRR  30.27  46.25  50.58  82.17 
The method [27]  13.69  50.01  61.32  73.63 
Our method  20.53  52.03  63.42  83.35 
5.5 Experiments on the Libor face database
The accuracies (%) on the Libor face database
No. of the training sample  1  2  3  4 

SRC  86.70  87.39  87.54  87.58 
LRC  89.75  91.25  91.45  91.82 
CRC  86.98  87.39  87.73  88.08 
INNC  87.50  88.85  89.67  90.25 
CFKNNC  82.15  82.33  84.75  86.89 
Homotopy  88.12  89.27  91.07  88.50 
PLAM  90.58  90.53  91.20  89.75 
The method [54]  87.22  88.30  88.20  88.57 
DSRM  91.74  92.25  93.08  96.68 
BDLRR  91.94  92.23  92.78  96.08 
The method [27]  86.88  88.12  88.00  88.28 
Our method  92.17  92.52  93.99  94.17 
5.6 Discussion and analysis
The above-mentioned competing methods can be categorized into five groups. SRC, LRC and CRC are traditional sparse representation-based classification methods. INNC and CFKNNC are based on the nearest neighbor classification method (NNC) [12]. Different from the traditional sparse representation-based classification methods, Homotopy and PLAM exploit greedy strategy approximation to solve the sparse representation problem, so they are regarded as iterative classification methods. The methods in [27, 54] belong to the grouped sparse representation-based classification methods. DSRM and BDLRR are the most advanced related methods published in recent years.
From the above experimental results, in most cases our method obtains higher recognition accuracy than the other competing methods. In fact, our method can be viewed as an improved SRC. Compared with the three traditional sparse representation-based classification methods and the two improved NNCs, the improvement in recognition accuracy of our method is obvious; the maximum improvement is greater than 23%. Although Homotopy and PLAM may obtain higher recognition accuracies than our method if the number of iterations is increased, their time complexity grows accordingly. Like our method, the methods in [27, 54] both exploit the original training set to generate a kind of virtual samples, and then use a grouped sparse strategy and a weighted fusion approach to classify the test sample. Moreover, in the step of fusing the residuals of a class, the method [27] uses the inner product of the residual vectors of a class to reflect the relations of different groups. However, both methods are outperformed by our method, which reflects that our analysis in Section 4 is rational. For DSRM and BDLRR, when the number of training samples is small, i.e., less than 4, in most cases the recognition accuracies of our method are higher than those of these two most advanced related methods. However, DSRM and BDLRR can beat our method when each class has a large number of training samples.
In recent years, deep learning [36, 45] has become a hot research topic in face recognition. However, deep learning needs a large number of training samples to train the model; if the number of training samples is very limited, the model may overfit. In this case, deep learning is not suitable for databases with a small number of samples. Different from deep learning, our method performs well on small databases.
Our method uses the pixels of the face image as the features to classify the test sample. However, the time complexity of our method increases accordingly if the size of the face image is large. Moreover, the framework of our method exploits the axis-symmetrical property of the face to generate a kind of virtual samples; if a face image is not symmetrical, the framework may generate a misshapen face image. Fortunately, previous studies [50, 51] show that image feature extraction can greatly reduce the dimensionality of the data without reducing the recognition rate. So our method may obtain good recognition accuracy and lower time complexity if it is combined with efficient feature extraction methods.
Meanwhile, previous studies [1, 2, 25] show that image features can be used for steganography [24, 28] and digital watermarking [9]. In this case, the integration of our method with an appointed feature extraction method might be used in security certification [21, 22] or information hiding [5, 6].
6 Conclusion
In this paper, we proposed a novel grouped representation-based classification method. The proposed method first combines all training samples of the nearest neighbor classes of the test sample to form the first group. Next, a framework is performed on the first group to generate a kind of virtual samples, which are combined to form the second group. Then, the proposed method performs SRC on these two groups to obtain two residuals for each class. Finally, a novel weighted fusion approach is used to fuse the two residuals and the distance of a class to recognize the test sample. The two residuals contain the discriminant information of the test sample in the first and second groups, respectively, and the distance simultaneously contains the discriminant information of the test sample in both groups. As shown in subsection 4.2.1, the virtual samples of our method are able to reflect the possible changes of poses, facial expressions and illuminations of face images well. Moreover, according to the figures in subsections 4.2.2 and 4.2.3, we conclude that our method has a strong representation ability for classifying the test sample. Experimental results on the ORL, Georgia Tech, FERET, CMU-PIE and Libor face databases reflect that these conclusions are reasonable.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (No.61672333, 61402274, 61461025), China Postdoctoral Science Foundation Special project (No.2014 T70937), the Program of Key Science and Technology Innovation Team in Shaanxi Province (No.2014KTC18), the Key Science and Technology Program of Shaanxi Province, China (No. 2016GY081), the Fundamental Research Funds for the Central Universities (No.GK201603083), Interdisciplinary Incubation Project of Learning Science of Shaanxi Normal University, Experimental Technology Research Project of Shaanxi Normal University (No. SYJS201314, SYJS201330).
References
 1.Abdul A, Gutub A (2010) Pixel Indicator technique for RGB image steganography. Journal of Emerging Technologies in Web Intelligence (JETWI) 2(1):56–64Google Scholar
 2.AbuMarie W, Gutub A (2010) Hussein AbuMansour, image based steganography using truth table based and determinate Array on RGB Indicator. International Journal of Signal and Image Processing (IJSIP) 1(3):196–204Google Scholar
 3.Alassaf N, Alkazemi B, Gutub A (2017) Applicable lightweight cryptography to secure medical data in iot systems. Journal of Research in Engineering and Applied Sciences (JREAS) 2(2):50–58Google Scholar
 4.Alharthi N, Gutub A (2017) Data visualization to explore improving decisionmaking within hajj services. Scientific Modelling and Research 2(1):9–18CrossRefGoogle Scholar
 5.AlOtaibi N, Gutub A (2014) Flexible StegoSystem for Hiding Text in Images of Personal Computers Based on User Security Priority, Proceedings of 2014 International Conference on Advanced Engineering Technologies (AET2014), pp. 250–256Google Scholar
 6.AlOtaibi N, Gutub A (2014) 2Leyer security system for hiding sensitive text data on personal computers. Lecture Notes on Information Theory 2(2):151–157Google Scholar
 7.Asif MS, Romberg J (2012) Fast and accurate algorithms for reweighted l _{1}norm minimization. IEEE Trans Signal Process 61(23):5905–5916zbMATHCrossRefGoogle Scholar
 8.Baum M On two types of deviation from the matching law: bias and undermatching. Experimental Analysis of Behavior. https://doi.org/10.1901/jeab.1974.22231 CrossRefGoogle Scholar
 9.Chen B, Wornell GW (2001) Quantization index modulation: a class of provably good methods for digital watermarking and information embedding. IEEE Information Theory Society 47(4):1423–1443MathSciNetzbMATHCrossRefGoogle Scholar
 10.Chen W, Er M, Wu S (2016) Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain. IEEE Trans Syst Man Cybern B, Cybern 36(2):458–466CrossRefGoogle Scholar
 11.Chen B, Yang Z, Huang S, Du X, Cui Z, Bhimani J, Xie X, Mi N (2017) CyberPhysical System Enabled Nearby Traffic Flow Modelling for Autonomous VehiclesGoogle Scholar
 12.Cover TM, Hart PE (1967) Nearest neighbor pattern classification. IEEE Trans Inform Theory 13(1):21–27zbMATHCrossRefGoogle Scholar
 13.Ding M, Fan G (2015) Multilayer Joint GaitPose Manifolds for Human Gait Motion Modeling. IEEE Transactions on Cybernetics 45(11):2413–2424CrossRefGoogle Scholar
 14.Ding M, Fan G (2016) Articulated and generalized Gaussian kernel correlation for human pose estimation, in. IEEE Trans Image Process 25(2):776–789MathSciNetzbMATHCrossRefGoogle Scholar
 15.Du B, Wang Z, Zhang L, Zhang L, Liu W, Shen J, Tao D (2015) Exploring representativeness and Informativeness for active learning. IEEE Transactions on Cybernetics 47(1):14–26CrossRefGoogle Scholar
 16.Ekman P, Hager JC, Friesen WV (1981) The symmetry of emotional and deliberate facial actions. Psychophysiology 18(2):101–106CrossRefGoogle Scholar
 17.Georghiades AS, Belhumeur PN, Kriegman D (2001) From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose. IEEE Trans. Pattern Anal. Mach. Intell. 23(6):643–660.CrossRefGoogle Scholar
 18.Georghiades S, Belhumeur N, Kriegman D (2001) From few to lighting and pose. IEEE Trans Pattern Anal Mach Intell 23(6):643–660 many: Illumination cone models for face recognition under variableCrossRefGoogle Scholar
 19.Goel N, Bebis G, Nefian A (2005) Face recognition experiments with random projection. Proc SPIE 5779:426–437CrossRefGoogle Scholar
 20.Gong C, Tao D, Liu W, Liu L, Yang J (2017) Label propagation via teachingtolearn and learningtoteach. IEEE Transactions on Neural Networks & Learning Systems 28(6):1452–1465CrossRefGoogle Scholar
 21.Gutub A (2015) Exploratory Data Visualization for Smart Systems, Smart Cities 2015  3rd Annual Digital Grids and Smart Cities Workshop, Burj Rafal Hotel Kempinski, Riyadh, Saudi ArabiaGoogle Scholar
 22.Gutub A (2015) Social Media & its Impact on egovernance, ME Smart Cities 20154th Middle East Smart Cities Summit, Pullman Dubai Deira City Centre Hotel, Dubai, UAEGoogle Scholar
 23.Gutub A, Alharthi N (2016) Improving Hajj and Umrah Services Utilizing Exploratory Data Visualization Techniques, Hajj Forum 2016  the 16th scientific Hajj research Forum, Organized by the Custodian of the Two Holy Mosques Institute for Hajj Research, Umm AlQura University  King Abdulaziz Historical Hall, Makkah, Saudi Arabia, 24–25Google Scholar
 24.Gutub A, Ankeer M, AbuGhalioun M, Shaheen A, Alvi A (2008) Pixel Indicator high capacity Technique for RGB image Based Steganography, WoSPA 2008  5th IEEE International Workshop on Signal Processing and its Applications, University of Sharjah, U.A.E, PP. 18–20Google Scholar
 25.Gutub A, AlQahtani A, Tabakh A (2009) TripleA: Secure RGB Image Steganography Based on Randomization, AICCSA2009  The 7th ACS/IEEE International Conference on Computer Systems and Applications, pp. 400–403Google Scholar
 26.Jain A, Nandakumar K, Ross A (2005) Score normalization in multimodal biometric systems. Pattern Recogn 38(12):2270–2285CrossRefGoogle Scholar
 27.Ke J, Peng Y, Liu S, Wu J, Qiu G (2017) Sample partition and grouped sparse representation. J Mod Opt 64(21):2289–2297CrossRefGoogle Scholar
 28.Khan F, Gutub A (2007) Message Concealment Techniques using Image based Steganography, The 4th IEEE GCC Conference and Exhibition, Gulf International Convention Centre, Manamah, Bahrain, pp. 2007Google Scholar
 29.Lai Z, Wong W, Xu Y, Yang J, Tang J, Zhang D (2016) Approximate orthogonal sparse embedding for dimensionality reduction. IEEE Transactions on Neural Networks and Learning Systems 27(4):723–735MathSciNetCrossRefGoogle Scholar
 30.Lai Z, Xu Y, Yang J, Shen L, Zhang D Rotational invariant dimensionality reduction algorithms. IEEE Transactions on Cybernetics. https://doi.org/10.1109/TCYB.2016.2578642 CrossRefGoogle Scholar
 31.Liu T, Tao D (2016) Classification with noisy labels by importance reweighting. IEEE Trans Pattern Anal Mach Intell 38(3):447–461CrossRefGoogle Scholar
 32.Liu W, Wang Y, Li SZ, Tan T (2004) Nearest intraclass space classifier for face recognition, in. Proceedings of the International Conference on Pattern Recognition (ICPR) 4:495–498CrossRefGoogle Scholar
 33.Liu S, Peng Y, Ben X, Yang W, Qiu G (2016) A novel label learning algorithm for face recognition. Signal Process 124:141–146CrossRefGoogle Scholar
 34.Liu S, Zhang X, Peng Y, Cao H (2016) Virtual images inspired consolidate collaborative representation based classification method for face recognition. J Mod Opt 63(12):1181–1188CrossRefGoogle Scholar
 35.Liu Z, Qiu Y, Peng YP, Zhang J (2017) Quaternion based maximum margin criterion method for color face recognition. Neural Process Lett 45(3):913–923CrossRefGoogle Scholar
 36.Liu Z, Luo P, Wang X, Tang X. Deep Learning Face Attributes in the Wild, ICCV, pp. 1–9Google Scholar
 37.Naseem I, Togneri R, Bennamoun M (2010) Linear regression for face recognition. IEEE Trans Pattern Anal Mach Intell 32(11):2106–2112CrossRefGoogle Scholar
 38.Niyogi P, Girosi F, Poggio T (1998) Incorporating prior information in machine learning by creating virtual examples. Proc IEEE 86(11):2196–2209CrossRefGoogle Scholar
 39.Phillips P, Moon H, Rizvi S, Rauss J (2000) The FERET evaluation methodology for facerecognition algorithms. IEEE Trans Pattern Anal Mach Intell 22(10):1090–1104CrossRefGoogle Scholar
 40.Ryu Y, Oh S (2002) Simple hybrid classifier for face recognition with adaptively generated virtual data. Pattern Recogn Lett 23(7):833–841zbMATHCrossRefGoogle Scholar
 41.Samaria F, Harter A (1994) A, parameterization of a stochastic model for human face Identication. In Proceedings of 2nd IEEE Workshop Applications Computer Vision 557(4):138–142CrossRefGoogle Scholar
 42.Sharma A, Dubey A, Tripathi P, Kuma V (2010) Pose invariant virtual classifiers from single training image using novel hybrideigenfaces. Neurocomputing 73(10–12):1868–1880CrossRefGoogle Scholar
 43.Sim T, Baker S, Bsat M (2002) The CMU pose, illumination, and expression (PIE) database. In Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition 435(19):46–51Google Scholar
 44.Song Y, Kim Y, Chang U, Kwon H (2006) Face recognition robust to left/right shadows; facial symmetry. Pattern Recogn 39(8):1542–1545CrossRefGoogle Scholar
 45.Sun Y, Wang X, Tang X. Deep Learning Face Representation from Predicting 10,000 Classes, CVPR, pp. 1–8Google Scholar
 46. Tan X, Chen S, Zhou Z, Zhang F (2006) Face recognition from a single image per person: a survey. Pattern Recogn 39(9):1725–1745
 47. Thian N, Marcel S, Bengio S (2003) Improving face authentication using virtual samples. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 6–10
 48. Wang Q, Zhang X, Wu Y, Tang L, Zha Z (2017) Non-convex weighted l_{p} minimization based group sparse representation framework for image denoising. IEEE Signal Process Lett. https://doi.org/10.1109/LSP.2017.2731791
 49. Wen X, Wen J (2016) Improved the minimum squared error algorithm for face recognition by integrating original face images and the mirror images. Optik - International Journal for Light and Electron Optics 127(2):883–889
 50. Wen J, Fang X, Cui J (2018) Robust sparse linear discriminant analysis. IEEE Transactions on Circuits and Systems for Video Technology
 51. Wen J, Xu Y, Li Z (2018) Inter-class sparsity based discriminative least square regression. Neural Netw 102:36–47
 52. Wright J, Yang A, Ganesh A, Sastry S, Yi M (2009) Robust face recognition via sparse representation. IEEE Trans Pattern Anal Mach Intell 31:210–227
 53. Xu Y, Zhang D, Yang J, Yang J (2011) A two-phase test sample sparse representation method for use with face recognition. IEEE Transactions on Circuits and Systems for Video Technology 21(9):1255–1262
 54. Xu Y, Zhu Z, Li Z, Liu G, Lu Y, Liu H (2013) Using the original and 'symmetrical face' training samples to perform representation based two-step face recognition. Pattern Recogn 46(4):1151–1158
 55. Xu Y, Zhu Q, Chen Y, Pan J (2013) An improvement to the nearest neighbor classifier and face recognition experiments. International Journal of Innovative Computing, Information and Control 9(2):543–554
 56. Xu Y, Zhu Q, Fan Z, Qiu M, Chen Y, Liu H (2013) Coarse to fine K nearest neighbor classifier. Pattern Recogn Lett 34:980–986
 57. Xu Y, Li X, Yang J, Zhang D (2014) Integrate the original face image and its mirror image for face recognition. Neurocomputing 131:191–199
 58. Xu Y, Zhang B, Zhong Z (2015) Multiple representations and sparse representation for image classification. Pattern Recogn Lett 68:9–14
 59. Xu Y, Zhong Z, Yang J, You J, Zhang D (2017) A new discriminative sparse representation method for robust face recognition via l_{2} regularization. IEEE Transactions on Neural Networks and Learning Systems 28(10):2233–2242
 60. Yang AY, Zhou Z, Balasubramanian AG, Sastry SS, Ma Y (2013) Fast l_{1}-minimization algorithms for robust face recognition. IEEE Trans Image Process 22(8):3234–3246
 61. Yin J, Liu Z, Jin Z, Yang W (2012) Kernel sparse representation based classification. Neurocomputing 77(1):120–128
 62. Yu H, Gao L, Li W, Du Q, Zhang B (2017) Locality sensitive discriminant analysis for group sparse representation-based hyperspectral imagery classification. IEEE Geosci Remote Sens Lett 14(8):1358–1362
 63. Zhang L, Yang M, Feng X (2011) Sparse representation or collaborative representation: which helps face recognition? In Proceedings of the International Conference on Computer Vision, pp. 471–478
 64. Zhang Z, Xu Y, Yang J, Li X, Zhang D (2015) A survey of sparse representation: algorithms and applications. IEEE Access 3:490–530
 65. Zhang X, Peng Y, Liu S, Wu J, Ren P (2017) A supervised dimensionality reduction method based sparse representation for face recognition. J Mod Opt 64(8):799–806
 66. Zhang Z, Xu Y, Shao L, Yang J (2017) Discriminative block-diagonal representation learning for image recognition. IEEE Transactions on Neural Networks and Learning Systems. https://doi.org/10.1109/TNNLS.2017.2712801
 67. Zhang G, Zou W, Zhang X, Hu X, Zhao Y (2017) Diversity and adaptive weighted fusion for face recognition. Digital Signal Processing 62:150–156
 68. Zhang B, Xu Y, Yang J (2018) Adaptive weighted nonnegative low-rank representation. Pattern Recogn 81:326–340
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.