
A novel facial image recognition method based on perceptual hash using quintet triple binary pattern

Abstract

Image classification (categorization) can be considered one of the most compelling domains of contemporary research. Indeed, people cannot hide their faces and related features, since these are essential for daily communication. Therefore, face recognition is extensively used in biometric applications for security and personnel attendance control. In this study, a novel face recognition method based on perceptual hash is presented, in which the proposed perceptual hash is utilized for the preprocessing and feature extraction phases. The Discrete Wavelet Transform (DWT) and a novel graph based binary pattern, called the quintet triple binary pattern (QTBP), are used, while the K-Nearest Neighbors (KNN) and Support Vector Machine (SVM) algorithms are employed for the classification task. The proposed face recognition method is tested on five well-known face datasets: AT&T, Face94, CIE, AR and LFW. Our method achieved 100.0% classification accuracy for the AT&T, Face94 and CIE datasets, 99.4% for the AR dataset and 97.1% for the LFW dataset. The time cost of the proposed method is O(n log n). The obtained results and comparisons distinctly indicate that our proposed method has very good classification capability with short execution time.

Introduction

Nowadays, diverse biometric methods have been utilized for authentication in security-priority systems [59, 61], among which facial image classification/recognition is one of the most widely used [44]. Indeed, image classification/categorization can be considered one of the most influential tasks in machine learning [52,53,54, 68] and computer vision, and it has attracted vivid attention from researchers worldwide [3]. Image processing methods have been widely applied to facial images for authentication in the literature [43]. Face recognition methods have been used in a wide range of areas, for instance military, social media, mobile platforms and urbanism [18, 34, 44, 50]. To this end, different machine learning methods have been utilized for face recognition, and the most important ones are listed as follows:

  • Deep learning-based face recognition [85]

  • Geometrical face recognition [73]

  • 3D face recognition [1]

  • Local pattern-based face recognition [12, 74, 82]

Local pattern-based face recognition methods perform recognition using salient facial features; therefore, their success rates in the literature are high [8]. These methods create a special pattern and scan the texture image with it; the texture image is then classified using the features obtained by scanning [37, 60]. Facial image recognition is one of the most widely used biometric methods in the literature, and proposing a hand-crafted, effective recognition method is one of the crucial problems in face recognition.

Face recognition methods have been widely used in biometric and web-based applications, and numerous mobile phones also employ face recognition. To achieve highly successful results, deep learning methods have been used, but these methods have high computational costs and need more data. The main aim of this research is to propose a lightweight, highly accurate and effective face recognition method. The proposed method is inspired by deep learning networks and tries to overcome the high computational cost of deep learning based face recognition methods; therefore, the presented model combines lightweight and effective techniques. Perceptual hash methods have generally been used in information security, especially image authentication. The main goal of perceptual hashes is to extract salient features from images for representation; in this view, they can be utilized as preprocessors and feature generators. Therefore, a novel perceptual hash method is proposed to perform preprocessing and feature extraction. The proposed perceptual hash and graph-based quintet-triple binary pattern based method combines different techniques for facial recognition and achieves high success rates on 5 well-known facial image datasets. The proposed QTBP is an LBP-like feature extractor; thus, a local feature extractor based, perceptual hash-based facial recognition method is presented. The main purpose of this work is to provide robust face recognition with high classification accuracy. The presented method comprises feature extraction using a perceptual hash, and classification. In the proposed perceptual hash, a novel local structure called the quintet-triple binary pattern (QTBP) is presented. QTBP is similar to LBP [47, 48] and can be considered a feature extractor. The main goal of such descriptors is to generate globally informative features from local relationships.
Graph theory [5, 10] is also used to define variable patterns for extracting informative features. It is worth noting that QTBP uses two basic shapes (a pentagon and a triangle) as a Hamilton graph [65], from which the patterns are created; QTBP shows that valuable features can be generated using basic shapes. Moreover, both the DWT and SVD techniques are used for salient feature extraction. The technical contributions of this paper are given below.

  • A novel perceptual hash is presented in this study and is utilized for preprocessing and feature generation. To obtain numerical results of the proposed perceptual hash-based face recognition method, 5 face image databases are chosen, and 10 widely used descriptors are used for comparison. The results show that the perceptual hash-based method achieves high facial image classification accuracy.

  • QTBP is a novel graph-based local descriptor whose main aim is to obtain informative and meaningful textural features from facial images. In the literature, there are many Hamilton graph based LBP-like descriptors, called local graph structures (LGS). To extract discriminative and informative features, we present a new LGS in this paper. This research aims to show the strength of a novel LGS and that such LGSs can be built from basic shapes; therefore, QTBP is presented.

  • Moreover, a lightweight face recognition method is proposed using perceptual hashing to obtain high classification accuracy on small facial image datasets, whereas deep learning methods mainly achieve high performance on big datasets. The main aim of the proposed facial recognition method is to reach high classification accuracies, and the results obtained prove the efficacy of this hand-crafted method.

The remainder of this paper is organized as follows. Section 2 gives related works; the databases are described in Section 3; the presented local feature generation function (QTBP) is introduced in Section 4; details of the proposed perceptual hash based face classification method are given in Section 5; results are presented and discussed in Section 6; and Section 7 presents conclusions and future works.

Related work

Some of the existing methods applied to the face recognition domain are reviewed here. Zhou et al. [84] proposed a Huffman and Local Binary Pattern (LBP) based face categorization method. In this work, interest points were first detected; face images were then detected using these points, and the proposed LBP-Huffman feature extraction method was applied to them. Ou et al. [49] presented a robust discriminative nonnegative based face recognition method, in which a regression matrix and a nonnegative matrix were used to classify face images. Liang et al. [35] suggested a 3D face recognition method using half-face matching. Lv et al. [40] presented a latent face image recognition method to classify 3D, sketch, low-resolution and high-resolution face images, using a Bayesian method for classification. Wang et al. [77] proposed adaptive SVD and 2D DWT (2-dimensional discrete wavelet transform) based skin detection and face recognition methods. Tang et al. [70] suggested a face recognition method using fractal codes; a short execution time was achieved, and the authors claimed that the method can readily be used in real-world applications. Vazquez-Fernandez and Gonzalez-Jimenez [72] discussed the importance of facial recognition on mobile platforms, where it is generally used for authentication. However, the authors noted that biometric data protection is important, as face and fingerprint templates are often stolen, and that the confidentiality of biometric data should be ensured. Jain et al. [23] used a hybrid deep neural network for facial expression recognition, combining a CNN (Convolutional Neural Network) and an RNN (Recurrent Neural Network). Rakshit et al.
[58] presented six local graph structures (LGS) for face recognition: vertical, vertical-symmetric, zigzag horizontal, zigzag horizontal middle, zigzag vertical, zigzag vertical middle, and logically extended LGSs. The KNN (K-nearest neighbor) classification method was tested on 5 face databases. In another study, Zhou et al. [86] proposed a method based on LBP, enhanced with improved pairwise-constrained multiple metric learning; a nearest neighborhood classifier was chosen at the classification stage. Ding et al. [12] proposed the Dual Cross Pattern (DCP) to extract textural face features with high classification accuracy; DCP uses 5 × 5 overlapping blocks for feature extraction and extracts 512-dimensional features. Vishwakarma and Dalal [74] presented a robust method for face recognition that operates on the Discrete Cosine Transform coefficients of the image; six face datasets were used in their study. Chakraborty et al. [8] suggested the local quadruple pattern (LQPAT) for facial image retrieval and recognition, in which a local structure is proposed for textural feature extraction; LQPAT uses 4 × 4 overlapping blocks to extract 512-dimensional features. Accordingly, Liu et al. [37] presented an extended LBP version to achieve high classification accuracy. Peng et al. [51] suggested a method based on LBP and ensemble learning for face presentation attack detection (using three different face datasets) with a suitable time cost.

Databases

To obtain numerical results from the proposed facial recognition method, 5 well-known facial image databases are used: AT&T, Face94, CIE, AR and LFW. These are labeled face image databases [13, 51, 62, 78], whose properties are described below.

AT&T

AT&T dataset consists of 300 images (for 30 people). The face images are gray-level and their size is 92 × 112 pixels. Several sample images from the AT&T dataset are shown in Fig. 1 [62].

Fig. 1

Sample images of AT&T database

Face94

Face94 dataset contains 300 images of 30 people. The face images are in color, with a size of 180 × 200 pixels. Some sample face images from the Face94 dataset are shown in Fig. 2 [32].

Fig. 2

Example images of Face94 database

CIE

Like the Face94 dataset, the CIE dataset also consists of 300 images (for 30 people). The face images are in color, with a size of 2048 × 1536 pixels. Some sample face images from the CIE dataset are shown in Fig. 3 [24, 63].

Fig. 3

Example images of CIE database

AR

AR dataset contains 310 face images of 31 people. These images are in color and their size is 768 × 536 pixels. Some sample face images from the AR dataset are presented in Fig. 4 [41, 42].

Fig. 4

Example face images of AR database

Labeled Faces in the Wild (LFW)

The Labeled Faces in the Wild (LFW) dataset is a widely used dataset, on which many methods have been evaluated to obtain numerical results. This heterogeneous dataset contains 13,000 images of 1680 people. Some sample face images from the LFW dataset are shown in Fig. 5 [28].

Fig. 5

Example face images of LFW database

The proposed graph-based quintet-triple binary pattern

Textural descriptors are very effective methods for face classification, since they extract valuable features from a face image; therefore, textural image descriptors have been widely used for face recognition. However, one of the most important problems is the selection of an effective pattern. For this problem, we use a graph-based pattern built from two basic shapes, a triangle and a pentagon. The main goal of the proposed graph-based local image descriptor is to generate valuable features from face images. This pattern is inspired by shape-based (ratio-based) facial image recognition. Accordingly, our hypothesis is that basic shape ratios can be used as a pattern, as described below.

Finding the optimal pattern for face recognition is an NP problem. Therefore, researchers may propose nature-inspired patterns such as the one shown in Fig. 6. In this view, we present a nature-inspired pattern to achieve a high performance rate; one of the fundamental aims of this research is to show the success of nature-inspired patterns for face recognition. The pattern consists of a pentagon and a triangle, so the proposed descriptor is called the quintet-triple binary pattern (QTBP). The main objective of QTBP is to obtain distinctive features from images, and 5 × 5 overlapping blocks are used to implement it. Figure 7 shows a numerical example of the QTBP graph-based binary pattern. Facial proportions are widely used features in face and facial expression recognition, but computing them is a very hard task, whereas image descriptors are effective, simple and fast methods for face recognition. The proposed QTBP combines these two views: a graph-based descriptor is presented whose edges resemble the lines used by proportion-based features.

Fig. 6

The inspired shape to create pattern of the QTBP

Fig. 7

A numerical example of QTBP graph-based binary pattern

Eqs. 1–10 describe the mathematical background of the QTBP.

$$ {b}_1=S\left({p}_{k+3,l+2},{p}_{k+2,l}\right) $$
(1)
$$ {b}_2=S\left({p}_{k+2,l},{p}_{k,l+1}\right) $$
(2)
$$ {b}_3=S\left({p}_{k,l+1},{p}_{k,l+3}\right) $$
(3)
$$ {b}_4=S\left({p}_{k,l+3},{p}_{k+2,l+4}\right) $$
(4)
$$ {b}_5=S\left({p}_{k+2,l+4},{p}_{k+3,l+2}\right) $$
(5)
$$ {b}_6=S\left({p}_{k+3,l+2},{p}_{k+4,l}\right) $$
(6)
$$ {b}_7=S\left({p}_{k+4,l},{p}_{k+4,l+4}\right) $$
(7)
$$ {b}_8=S\left({p}_{k+4,l+4},{p}_{k+3,l+2}\right) $$
(8)
$$ S\left(a,b\right)=\left\{\begin{array}{c}1,a\ge b\\ {}0,a<b\end{array}\right. $$
(9)
$$ val=\sum \limits_{t=1}^8{b}_t\times {2}^{8-t} $$
(10)

where b = {b1, b2, …, b8} is the set of binary values, S(·,·) is the comparison function defined in Eq. 9, p denotes pixel values and val is the resulting decimal value.
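As a concrete illustration, the eight comparisons of Eqs. 1–8 and the weighting of Eq. 10 can be sketched in Python/NumPy as follows. This is a minimal sketch, not the authors' code; the (row, col) offsets assume the block's top-left pixel is p_{k,l}:

```python
import numpy as np

# Vertex pairs of the pentagon/triangle traversal in Eqs. 1-8,
# written as (row, col) offsets from the top-left pixel p_{k,l}
# of a 5x5 block.
QTBP_EDGES = [
    ((3, 2), (2, 0)),  # b1
    ((2, 0), (0, 1)),  # b2
    ((0, 1), (0, 3)),  # b3
    ((0, 3), (2, 4)),  # b4
    ((2, 4), (3, 2)),  # b5
    ((3, 2), (4, 0)),  # b6
    ((4, 0), (4, 4)),  # b7
    ((4, 4), (3, 2)),  # b8
]

def qtbp_value(block):
    """Decimal QTBP code of a single 5x5 block (Eqs. 1-10)."""
    # Eq. 9: S(a, b) = 1 if a >= b, else 0
    bits = [1 if block[a] >= block[b] else 0 for a, b in QTBP_EDGES]
    # Eq. 10: val = sum_t b_t * 2^(8-t); b1 is the most significant bit
    return sum(bit << (8 - t) for t, bit in enumerate(bits, start=1))
```

Note that a constant block yields val = 255, since every comparison a ≥ b holds.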

An example of the proposed QTBP is shown in Fig. 8. As shown in Fig. 8, there are three main parts: (a) raw face image, (b) QTBP image and (c) histogram of the QTBP image.

Fig. 8

The proposed QTBP

The proposed QTBP is a special image descriptor for face recognition; therefore, we use it as the descriptor of the proposed face recognition method together with the perceptual hash, where it yields more distinctive features. The proposed QTBP is a graph-based descriptor whose edges resemble the lines used by proportion-based features. Hence, we posit that it is a suitable descriptor for facial recognition, which is supported in the experimental results section. The procedure of the proposed QTBP is given in Algorithm 1.

Algorithm 1 Procedure of the proposed QTBP

The obtained results of the proposed perceptual hash are presented in Section 5.
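The QTBP procedure above can be sketched as follows, as we read it: the image is scanned with overlapping 5 × 5 blocks (stride 1), each block is coded by QTBP, and the 256-bin histogram of the coded image is the feature vector. The per-block coding is repeated here so the sketch is self-contained; it is an illustration under those assumptions, not the authors' implementation:

```python
import numpy as np

# (row, col) offsets of the compared pixel pairs (Eqs. 1-8)
QTBP_EDGES = [((3, 2), (2, 0)), ((2, 0), (0, 1)), ((0, 1), (0, 3)),
              ((0, 3), (2, 4)), ((2, 4), (3, 2)), ((3, 2), (4, 0)),
              ((4, 0), (4, 4)), ((4, 4), (3, 2))]

def qtbp_value(block):
    """QTBP code of one 5x5 block (Eqs. 1-10)."""
    bits = [1 if block[a] >= block[b] else 0 for a, b in QTBP_EDGES]
    return sum(bit << (8 - t) for t, bit in enumerate(bits, start=1))

def qtbp_features(img):
    """QTBP image and 256-bin histogram feature of a grayscale image."""
    h, w = img.shape
    coded = np.zeros((h - 4, w - 4), dtype=np.uint8)
    for i in range(h - 4):          # overlapping 5x5 blocks, stride 1
        for j in range(w - 4):
            coded[i, j] = qtbp_value(img[i:i + 5, j:j + 5])
    hist = np.bincount(coded.ravel(), minlength=256)
    return coded, hist
```

For an H × W image this produces an (H − 4) × (W − 4) coded image, so the histogram always has (H − 4)(W − 4) counts spread over 256 bins.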

The proposed perceptual hash-based face recognition method

Perceptual hash [79] is one of the newest-generation decomposition and feature extraction methods for images, and it has been widely used in multimedia information security applications. According to the literature, perceptual hashes generate salient features for image authentication [81], and these salient features can be used in machine learning methods. LBP-like image descriptors are also effective feature extractors for textural and facial image classification. Therefore, we use a perceptual hash with a novel image descriptor to generate meaningful features. The proposed perceptual hash is a block-based method comprising both preprocessing and feature extraction. The preprocessing step consists of 3LSBs elimination, RGB to grayscale conversion, image resizing (bilinear interpolation) and median filtering, which makes the extracted facial features robust against filtering attacks. The 3-level 2D DWT [9], SVD [67] and QTBP are used in the feature extraction phase of the presented perceptual hash. Features robust against JPEG compression [30] are extracted using the 3-level DWT; a secondary image is created using 4 × 4 SVD, which provides robustness against geometrical attacks; and the final features are obtained using the proposed QTBP. Briefly, the proposed perceptual hash extracts robust and salient features. The graphical outline of the proposed method is shown in Fig. 9.

Fig. 9

Schematic description of the proposed perceptual hash-based feature generator

The steps of the proposed perceptual hash-based feature extraction are presented below.

  1. Step 1:

    Set the 3 LSBs of the raw image to 0. LSB attacks can manipulate most methods, for instance deep learning methods; this step provides robustness against such attacks. The mathematical notation is given in Eq. 11.

$$ LI=\left\lfloor \frac{RI}{8}\right\rfloor \times 8 $$
(11)

where LI is 3LSBs eliminated image and RI is raw image.

  2. Step 2:

    Convert RGB image to grayscale.

$$ gray= rgb2 gray(LI) $$
(12)

where rgb2gray(.) is an RGB to grayscale conversion function.

  3. Step 3:

    Resize the image using bilinear interpolation. This step provides robustness against resizing attacks, which are very critical for artificial intelligence methods. The proposed perceptual hash is also an information security oriented (semantic authentication) facial image categorization method; to ensure robustness, resizing is applied.

$$ gray= resize\left( gray,1024\times 1024\right) $$
(13)
$$ gray= resize\left( gray,512\times 512\right) $$
(14)

where resize(.) is an image resizing function using bilinear interpolation.

  4. Step 4:

    Apply a 5 × 5 median filter to the gray-level image. Since the block size of the proposed QTBP is 5 × 5, a 5 × 5 median filter is used.

  5. Step 5:

    Apply the 3-level 2D DWT. As known from the literature, the 3-level 2D DWT is one of the most effective discrete wavelet transform configurations. This step provides robustness against JPEG compression attacks.

$$ \left[l{l}^1,l{h}^1,h{l}^1,h{h}^1\right]= DWT2(gray) $$
(15)
$$ \left[l{l}^2,l{h}^2,h{l}^2,h{h}^2\right]= DWT2\left(l{l}^1\right) $$
(16)
$$ \left[l{l}^3,l{h}^3,h{l}^3,h{h}^3\right]= DWT2\left(l{l}^2\right) $$
(17)
  6. Step 6:

    Divide ll3 into 4 × 4 non-overlapping blocks.

  7. Step 7:

    Apply SVD to each block. SVD is commonly used in image processing methods because it extracts invariant features; therefore, we use it to achieve invariance. The singular matrix of each block is calculated; it is a diagonal matrix whose maximum value is S1.

  8. Step 8:

    Store the S1 value of each block to create the secondary image. Most perceptual hash methods use SVD to generate an invariant image, which is called the secondary image.

  9. Step 9:

    Apply QTBP to the secondary image to calculate the QTBP image.

  10. Step 10:

    Extract the histogram of the QTBP image to obtain the feature vector. The salient features of the facial images are obtained using the proposed perceptual hash; its transitions are presented in Table 1.

  11. Step 11:

    Classify the extracted features using SVM or KNN with stratified 10-fold cross-validation.

Table 1 Transitions of the proposed perceptual hash

We used the MATLAB Classification Learner toolbox, which contains various classifiers, for the experiments in this study. The extracted features were tested on SVMs and KNNs, and the best accuracy rates were obtained using a quadratic-kernel SVM and a city-block 1NN (the simplest version of KNN, with K selected as 1). Note that 10-fold cross-validation was used to obtain the classification accuracies. Briefly, preprocessing and feature extraction are performed by the presented perceptual hash, and quadratic-kernel SVM and KNN classifiers are used in the classification phase.
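Step 1 and Steps 5–8 above can be sketched in Python/NumPy as follows. This is an illustrative sketch under stated assumptions: the text does not name the wavelet family, so an orthonormal Haar LL band is used here, and preprocessing Steps 2–4 are taken as already done. The QTBP coding and histogram of Steps 9–10 (Section 4) would then be applied to the returned secondary image:

```python
import numpy as np

def clear_3lsb(raw):
    """Step 1 / Eq. 11: LI = floor(RI / 8) * 8 zeroes the 3 LSBs."""
    return (np.asarray(raw, dtype=np.uint8) // 8) * 8

def haar_ll(x):
    """LL (approximation) band of one 2-D DWT level. Haar is an
    assumption here; the paper does not name the wavelet family."""
    return (x[0::2, 0::2] + x[0::2, 1::2] +
            x[1::2, 0::2] + x[1::2, 1::2]) / 2.0

def secondary_image(gray512):
    """Steps 5-8 on a preprocessed 512x512 grayscale image:
    3-level DWT (LL path), then the largest singular value S1 of
    every 4x4 non-overlapping block of ll3."""
    ll = np.asarray(gray512, dtype=float)
    for _ in range(3):                      # Step 5: 512 -> 256 -> 128 -> 64
        ll = haar_ll(ll)
    h, w = ll.shape
    sec = np.zeros((h // 4, w // 4))        # Steps 6-8: 16x16 secondary image
    for i in range(h // 4):
        for j in range(w // 4):
            block = ll[4 * i:4 * i + 4, 4 * j:4 * j + 4]
            sec[i, j] = np.linalg.svd(block, compute_uv=False)[0]
    return sec
```

With a 512 × 512 input, the LL path shrinks the image to 64 × 64 and the block-wise S1 values form a 16 × 16 secondary image, which is why the overall pipeline keeps reducing the data it processes.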

Experimental results and discussions

In this paper, 5 well-known face databases were used in the experiments. The proposed and existing methods were implemented in MATLAB 2018a.

Moreover, accuracy (Acc) is used to obtain numerical results from this method. The mathematical definition of Acc is given in Eq. 18 [71].

$$ Acc\left(\%\right)=\frac{\# True\ predicted\ images}{\# Total\ images}\times 100 $$
(18)

Stratified 10-fold cross-validation is used for classification, and the accuracies of the classifiers are listed in Table 2.
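The 1NN branch of the classification step, together with the accuracy measure of Eq. 18, can be sketched in NumPy as below. The quadratic-kernel SVM and the stratified 10-fold split were run in MATLAB's Classification Learner; any library SVM with a degree-2 polynomial kernel would stand in for that part, so only the city-block 1NN is shown:

```python
import numpy as np

def cityblock_1nn(train_X, train_y, test_X):
    """1-nearest-neighbour prediction under the city-block (L1)
    distance, i.e. the paper's 'City Block 1NN' classifier."""
    train_X = np.asarray(train_X, dtype=float)
    train_y = np.asarray(train_y)
    preds = []
    for x in np.asarray(test_X, dtype=float):
        d = np.abs(train_X - x).sum(axis=1)   # L1 distance to each sample
        preds.append(train_y[np.argmin(d)])   # label of the closest sample
    return np.array(preds)

def accuracy_pct(preds, labels):
    """Eq. 18: percentage of correctly predicted images."""
    return 100.0 * np.mean(np.asarray(preds) == np.asarray(labels))
```

Here `train_X` holds one QTBP histogram per row; with K = 1 there is no vote to break, which is what makes this the simplest KNN configuration.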

Table 2 Classification accuracy of the proposed method

Table 2 clearly shows that the proposed perceptual hash based facial image categorization method has a high recognition capability. To better assess its performance, some existing face recognition methods are compared with the proposed method. The comparison results for the AT&T, Face94, CIE and AR databases are given in Table 3; for these comparisons, widely used image descriptors for face recognition were implemented.

Table 3 Comparison of results for the selected databases

The proposed method reached a 100% recognition rate on the AT&T, Face94 and CIE databases, and 99.4% on the AR database. This high recognition rate has been achieved through the proposed perceptual hash, and the recognition performance of the proposed method is superior to that of existing methods.

Also, the performance of the method was compared with state-of-the-art methods using the AR and LFW datasets. As indicated in Table 5, the proposed method performed successfully on the small and homogeneous face datasets. LFW is a big and heterogeneous dataset; using 1NN with the city-block distance, a 97.1% success rate is achieved on it. The results for the AR and LFW datasets using state-of-the-art methods are presented in Tables 4 and 5, respectively.

Table 4 Comparison of the average recognition rates of the proposed method with other state of art descriptor-based methods by using the AR dataset
Table 5 The average classification accuracies of the proposed method and the other existing methods for the LFW dataset

As seen from Table 4, the proposed method has the best classification capability among all the descriptor-based methods.

Table 5 compares the performance of the proposed method with other widely used descriptors on the LFW dataset; other widely used methods and deep methods are also listed there.

Table 5 illustrates that the proposed perceptual hash feature generation based face classification method has a high classification performance and can be considered one of the best among the descriptor-based methods. However, FaceNet and the Multi-Directional Local Gradient Descriptor (MDLGD) have higher accuracy rates than our method, because FaceNet is a deep learning method and MDLGD has a complex mathematical background. Both also have a high time cost, whereas our method is a cognitive and lightweight method. Moreover, the proposed perceptual hash-based face recognition method achieved superior results to MDLGD on the AR dataset.

The time cost of this method is listed in Table 6; Big O notation is used to express the complexity.

Table 6 Time-cost of the proposed perceptual hash-based method

As can be seen from Table 6, the proposed method has low computational complexity. The proposed perceptual hash also has O(n log n) complexity, where n is the size of the image, because it decreases the image size step by step.

The experimental outcomes are summarized as follows:

  • A novel perceptual hash-based face recognition method is presented in this study. The proposed method extracts very robust and informative features and combines the preprocessing and feature extraction phases.

  • We propose a novel descriptor in this paper, called QTBP. The proposed descriptor is more suitable than LBP because it is designed for faces, whereas LBP was presented for textural image classification. Table 4 shows that the proposed method achieves a 0.994 accuracy rate on the AR dataset, while LBP achieved 0.988. Also, 0.832 and 0.971 classification accuracy rates were computed using the LBP and QTBP based methods on LFW, respectively. These results prove the success of the proposed QTBP. To classify LFW images, many deep methods have been used, but we used the conventional KNN and SVM classifiers.

  • In the classification phase, the conventional KNN and SVM classifiers are used to show the discriminative effect of the extracted features.

  • The quadratic kernel is suitable for facial image classification and KNN (K = 1, city-block distance) is the simplest classifier. Therefore, quadratic kernel SVM and KNN are chosen as classifiers.

  • A 100.0% accuracy rate is achieved on the AT&T, Face94 and CIE datasets using SVM, and 99.4% on the AR dataset.

  • The SVM classifier exhibits better performance than KNN in this work.

  • The proposed method extracts universal features; it is simple and has a high success rate.

  • The proposed method is also tested on the LFW dataset, which is heterogeneous and big. According to Table 5, the proposed QTBP and perceptual hash-based method has the best success rate among the descriptors. FaceNet is still more successful than our method; however, our method is lightweight and does not use any exemplar, pyramid-like or multilayer structure to extract features. It is applied to images without any face detection phase, and the extracted features are classified using the simplest classifiers only. It can be said that the proposed method is a purely cognitive method.

  • The proposed method has an approximately 9% higher accuracy rate than the best of the other descriptor-based methods on the LFW dataset (except MDLGD).

The main advantages of the proposed method are as follows:

  • This method has a basic mathematical structure and can be implemented easily.

  • The proposed method has a low time cost; therefore, real-time face recognition can also be built using the proposed perceptual hash-based method.

  • The positive effects of the perceptual hash are used directly in this method. This study clearly shows that perceptual hash methods are useful for face recognition.

  • The proposed method is simple.

  • The proposed method extracts universal features. Its success rates across the 5 facial image datasets support this claim.

The only limitation of this study is that the proposed method cannot achieve a higher classification rate than FaceNet and MDLGD. To address this issue, we aim to apply various evolutionary algorithms (EAs) to optimize the parameters of the base classification algorithms used in this study [2, 6, 19, 46, 55, 56, 87].

Conclusion and future work

In this study, a novel perceptual hash-based facial recognition method is presented with the goal of highly accurate face classification. Preprocessing and feature extraction were performed by the proposed perceptual hash, whose purpose was to extract robust and informative facial features across different image datasets. In addition, a new graph-based descriptor was defined to derive the textural features, and SVM and KNN classifiers were used in the classification phase. The proposed methodology was applied to 5 well-known face databases, and other state-of-the-art face classification methods were used for comparison. The experimental results clearly illustrate that the proposed method has a very high face image recognition capability with low computational complexity, outperforming previous methods in the literature. Briefly, the proposed method achieved 100%, 100%, 100%, 99.40% and 97.10% classification accuracies on AT&T, Face94, CIE, AR and LFW, respectively.

In future studies, novel lightweight deep learning methods can be proposed for face recognition. In addition, other descriptors and perceptual hashes can be used in novel face recognition methods. Since descriptors are also used in textural image recognition, novel textural image classification, retrieval and recognition methods will be proposed using perceptual hashes and image descriptors together.

References

  1. Abate AF, Nappi M, Riccio D, Sabatino G (2007) 2D and 3D face recognition: a survey. Pattern Recogn Lett 28:1885–1906. https://doi.org/10.1016/j.patrec.2006.12.018

  2. Abdar M, Wijayaningrum VN, Hussain S, Alizadehsani R, Plawiak P, Acharya UR, Makarenkov V (2019) IAPSO-AIRS: a novel improved machine learning-based system for wart disease treatment. J Med Syst 43:220. https://doi.org/10.1007/s10916-019-1343-0

  3. Abdullah MFA, Sayeed MS, Sonai Muthu K, Bashier HK, Azman A, Ibrahim SZ (2014) Face recognition with symmetric local graph structure (SLGS). Expert Syst Appl 41:6131–6137. https://doi.org/10.1016/j.eswa.2014.04.006

  4. Abusham EEA, Bashir HK (2011) Face recognition using Local Graph Structure (LGS). Lect Notes Comput Sci 6762:169–175. https://doi.org/10.1007/978-3-642-21605-3_19

  5. Akbarian B, Erfanian A (2020) A framework for seizure detection using effective connectivity, graph theory, and multi-level modular network. Biomed Signal Process Control 59. https://doi.org/10.1016/j.bspc.2020.101878

  6. Basiri ME, Nemati S (2009) A novel hybrid ACO-GA algorithm for text feature selection. In: 2009 IEEE Congress on Evolutionary Computation (CEC 2009). IEEE, pp 2561–2568

  7. Chakraborty S, Singh SK, Chakraborty P (2016) Local gradient Hexa pattern: a descriptor for face recognition and retrieval. IEEE Trans Circuits Syst Video Technol 28:171–180. https://doi.org/10.1109/tcsvt.2016.2603535

  8. Chakraborty S, Singh SK, Chakraborty P (2017) Local quadruple pattern: a novel descriptor for facial image recognition and retrieval. Comput Electr Eng 62:92–104. https://doi.org/10.1016/j.compeleceng.2017.06.013

  9. 9.

    Chien JT, Wu CC (2002) Discriminant waveletfaces and nearest feature classifiers for face recognition. IEEE Trans Pattern Anal Mach Intell 24:1644–1649. https://doi.org/10.1109/TPAMI.2002.1114855

    Article  Google Scholar 

  10. 10.

    Dehmer M, Emmert-Streib F, Shi Y (2017) Quantitative graph theory: a new branch of graph theory and network science. Inf Sci (Ny) 418–419:575–580. https://doi.org/10.1016/j.ins.2017.08.009

    MathSciNet  Article  MATH  Google Scholar 

  11. 11.

    Deng W, Hu J, Guo J (2019) Compressive binary patterns: designing a robust binary face descriptor with random-field Eigenfilters. IEEE Trans Pattern Anal Mach Intell 41:758–767. https://doi.org/10.1109/TPAMI.2018.2800008

    Article  Google Scholar 

  12. 12.

    Ding C, Choi J, Tao D, Davis LS (2016) Multi-directional multi-level dual-cross patterns for robust face recognition. IEEE Trans Pattern Anal Mach Intell 38:518–531. https://doi.org/10.1109/TPAMI.2015.2462338

    Article  Google Scholar 

  13. 13.

    Ding L, Martinez AM (2010) Features versus context: an approach for precise and detailed detection and delineation of faces and facial features. IEEE Trans Pattern Anal Mach Intell 32:2022–2038. https://doi.org/10.1109/TPAMI.2010.28

    Article  Google Scholar 

  14. 14.

    Du L, Hu H (2019) Nuclear norm based adapted occlusion dictionary learning for face recognition with occlusion and illumination changes. Neurocomputing 340:133–144. https://doi.org/10.1016/j.neucom.2019.02.053

    Article  Google Scholar 

  15. 15.

    El Merabet Y, Ruichek Y (2018) Local concave-and-convex micro-structure patterns for texture classification. Pattern Recogn 76:303–322. https://doi.org/10.1016/j.patcog.2017.11.005

    Article  Google Scholar 

  16. 16.

    Fathi A, Alirezazadeh P, Abdali-Mohammadi F (2016) A new global-Gabor-Zernike feature descriptor and its application to face recognition. J Vis Commun Image Represent 38:65–72. https://doi.org/10.1016/j.jvcir.2016.02.010

    Article  Google Scholar 

  17. 17.

    Fernández A, Álvarez MX, Bianconi F (2011) Image classification with binary gradient contours. Opt Lasers Eng 49:1177–1184. https://doi.org/10.1016/j.optlaseng.2011.05.003

    Article  Google Scholar 

  18. 18.

    Gupta S, Gandhi T (2020). Identification of neural correlates of face recognition using machine learning approach. In: Advances in Intelligent Systems and Computing. Springer, pp. 13–20

  19. 19.

    Hassoon M, Kouhi MS, Zomorodi-Moghadam M, Abdar M (2017) Using PSO algorithm for producing best rules in diagnosis of heart disease. In: 2017 international conference on computer and applications, ICCA 2017. IEEE, pp 306–311

  20. 20.

    Huang J, Zhang Y, Zhang H, Cheng K (2019). Sparse Representation Face Recognition Based on Gabor and CSLDP Feature Fusion. In: Proceedings of the 31st Chinese Control and Decision Conference, CCDC 2019. pp 5697–5701

  21. 21.

    Hung TY, Fan KC (2014) Local vector pattern in high-order derivative space for face recognition. 2014 IEEE Int Conf image process ICIP 2014 23:239–243. https://doi.org/10.1109/ICIP.2014.7025047

  22. 22.

    Jabid T, Kabir MH, Chae O (2012). Local directional pattern (LDP) for face recognition. In: International journal of innovative computing, Information and Control. IEEE, pp. 2423–2437

  23. 23.

    Jain N, Kumar S, Kumar A, Shamsolmoali P, Zareapoor M (2018) Hybrid deep neural networks for face emotion recognition. Pattern Recogn Lett 115:101–106. https://doi.org/10.1016/j.patrec.2018.04.010

    Article  Google Scholar 

  24. 24.

    Kabaciński R, Kowalski M (2011) Vein pattern database and benchmark results. Electron Lett 47:1127–1128. https://doi.org/10.1049/el.2011.1441

    Article  Google Scholar 

  25. 25.

    Kagawade VC, Angadi SA (2019) Multi-directional local gradient descriptor: A new feature descriptor for face recognition. Image Vis Comput 83–84:39–50. https://doi.org/10.1016/j.imavis.2019.02.001

    Article  Google Scholar 

  26. 26.

    Kar A, Neogi PPG (2020) Triangular coil pattern of local radius of gyration face for heterogeneous face recognition. Appl Intell 50:698–716. https://doi.org/10.1007/s10489-019-01545-x

    Article  Google Scholar 

  27. 27.

    Kas M, El Merabet Y, Ruichek Y, Messoussi R (2018) Mixed neighborhood topology cross decoded patterns for image-based face recognition. Expert Syst Appl 114:119–142. https://doi.org/10.1016/j.eswa.2018.07.035

    Article  Google Scholar 

  28. 28.

    Kawulok M, Celebi ME, Smolka B (2016). Advances in face detection and facial image analysis. Springer

  29. 29.

    Kaya Y, Ertuʇrul ÖF, Tekin R (2015) Two novel local binary pattern descriptors for texture analysis. Appl Soft Comput J 34:728–735. https://doi.org/10.1016/j.asoc.2015.06.009

    Article  Google Scholar 

  30. 30.

    Kim Y, Soh JW, Cho NI (2020) AGARNet: adaptively gated JPEG compression artifacts removal network for a wide range quality factor. IEEE Access 8:20160–20170. https://doi.org/10.1109/ACCESS.2020.2968944

    Article  Google Scholar 

  31. 31.

    Kostinger M, Hirzer M, Wohlhart P, et al (2012). Large scale metric learning from equivalence constraints. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE, pp. 2288–2295

  32. 32.

    Krol M, Florek A (2008). Comparison of statistical classifiers as applied to the face recognition system based on active shape models. In: Computer Recognition Systems. Springer, pp. 791–797

  33. 33.

    Kumar D, Garain J, Kisku DR, Sing JK, Gupta P (2020). Unconstrained and constrained face recognition using dense local descriptor with ensemble framework. Neurocomputing. https://doi.org/10.1016/j.neucom.2019.10.117

  34. 34.

    Li Z, Yu P, Yan H, Jiang Y (2020) Face recognition based on local binary pattern auto-correlogram. In: Smart innovation. Springer, Systems and Technologies, pp 333–340

    Google Scholar 

  35. 35.

    Liang Y, Zhang Y, Zeng XX (2017) Pose-invariant 3D face recognition using half face. Signal Process Image Commun 57:84–90. https://doi.org/10.1016/j.image.2017.05.004

    Article  Google Scholar 

  36. 36.

    Liao M, Gu X (2020) Face recognition approach by subspace extended sparse representation and discriminative feature learning. Neurocomputing 373:35–49. https://doi.org/10.1016/j.neucom.2019.09.025

    Article  Google Scholar 

  37. 37.

    Liu L, Fieguth P, Zhao G, Pietikäinen M, Hu D (2016) Extended local binary patterns for face recognition. Inf Sci (Ny) 358–359:56–72. https://doi.org/10.1016/j.ins.2016.04.021

    Article  Google Scholar 

  38. 38.

    Liu S, Wang Y, Wu X, Li J, Lei T (2020). Discriminative dictionary learning algorithm based on sample diversity and locality of atoms for face recognition J Vis Commun Image Represent 102763. https://doi.org/10.1016/j.jvcir.2020.102763, 71

  39. 39.

    Luo X, Xu Y, Yang J (2019) Multi-resolution dictionary learning for face recognition. Pattern Recogn 93:283–292. https://doi.org/10.1016/j.patcog.2019.04.027

    Article  Google Scholar 

  40. 40.

    Lv JJ, Huang JS, Zhou XD, Zhou X, Feng Y (2016) Latent face model for across-media face recognition. Neurocomputing 216:735–745. https://doi.org/10.1016/j.neucom.2016.08.036

    Article  Google Scholar 

  41. 41.

    Martinez AM, Benavente R (1998). The AR face database. CVC Tech Rep 24%6:%&. https://doi.org/10.1023/B:VISI.0000029666.37597

  42. 42.

    Martinez AM, Kak AC (2001) PCA versus LDA. IEEE Trans Pattern Anal Mach Intell 23:228–233. https://doi.org/10.1109/34.908974

    Article  Google Scholar 

  43. 43.

    Masi I, Wu Y, Hassner T, Natarajan P (2019). Deep face recognition: A Survey. In: proceedings - 31st conference on graphics, Patterns and Images, SIBGRAPI 2018. pp. 471–478

  44. 44.

    Moustafa AA, Elnakib A, Areed NFF (2020). Age-invariant face recognition based on deep features analysis. Signal, image video process 1–8. https://doi.org/10.1007/s11760-020-01635-1

  45. 45.

    Murala S, Maheshwari RP, Balasubramanian R (2012) Local tetra patterns: a new feature descriptor for content-based image retrieval. IEEE Trans Image Process 21:2874–2886. https://doi.org/10.1109/TIP.2012.2188809

    MathSciNet  Article  MATH  Google Scholar 

  46. 46.

    Nemati S, Basiri ME, Ghasem-Aghaee N, Aghdam MH (2009) A novel ACO-GA hybrid algorithm for feature selection in protein function prediction. Expert Syst Appl 36:12086–12094. https://doi.org/10.1016/j.eswa.2009.04.023

    Article  Google Scholar 

  47. 47.

    Ojala T, Pietikäinen M, Mäenpää T (2001). A generalized local binary pattern operator for multiresolution gray scale and rotation invariant texture classification. In: lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics). Springer, pp 397–406

  48. 48.

    Ojala T, Pietikäinen M, Mäenpää T (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 24:971–987. https://doi.org/10.1109/TPAMI.2002.1017623

    Article  MATH  Google Scholar 

  49. 49.

    Ou W, Luan X, Gou J, Zhou Q, Xiao W, Xiong X, Zeng W (2018) Robust discriminative nonnegative dictionary learning for occluded face recognition. Pattern Recogn Lett 107:41–49. https://doi.org/10.1016/j.patrec.2017.07.006

    Article  Google Scholar 

  50. 50.

    Ou W, You X, Tao D, Zhang P, Tang Y, Zhu Z (2014) Robust face recognition via occlusion dictionary learning. Pattern Recogn 47:1559–1572. https://doi.org/10.1016/j.patcog.2013.10.017

    Article  Google Scholar 

  51. 51.

    Peng F, Qin L, Long M (2020). Face presentation attack detection based on chromatic co-occurrence of local binary pattern and ensemble learning. J Vis Commun Image Represent 66:102746. https://doi.org/10.1016/j.jvcir.2019.102746

  52. 52.

    P Pławiak, M Abdar, UR Acharya (2019). Application of new deep genetic Cascade ensemble of SVM classifiers to predict the Australian credit scoring; Elsevier, applied soft computing; 84(2019):105740

  53. 53.

    P Pławiak, M Abdar, J Pławiak, V Makarenkov, UR Acharya (2020). DGHNL: a new deep genetic hierarchical network of learners for prediction of credit scoring; Elsevier, information sciences; 516(2020):401–418

  54. 54.

    Pławiak P, Tadeusiewicz R (2014) Approximation of phenol concentration using novel hybrid computational intelligence methods. Int J Appl Math Comput Sci 24(1):165–181. https://doi.org/10.2478/amcs-2014-0013

    Article  MATH  Google Scholar 

  55. 55.

    Pourpanah F, Lim CP, Wang X, Tan CJ, Seera M, Shi Y (2019) A hybrid model of fuzzy min–max and brain storm optimization for feature selection and data classification. Neurocomputing 333:440–451. https://doi.org/10.1016/j.neucom.2019.01.011

    Article  Google Scholar 

  56. 56.

    Pourpanah F, Shi Y, Lim CP, Hao Q, Tan CJ (2019) Feature selection based on brain storm optimization for data classification. Appl Soft Comput J 80:761–775. https://doi.org/10.1016/j.asoc.2019.04.037

    Article  Google Scholar 

  57. 57.

    Rajput S, Bharti J (2016) A face recognition using linear-diagonal binary graph pattern feature extraction method. Int J Found Comput Sci Technol 6:55–65. https://doi.org/10.5121/ijfcst.2016.6205

    Article  Google Scholar 

  58. 58.

    Rakshit RD, Nath SC, Kisku DR (2018) Face identification using some novel local descriptors under the influence of facial complexities. Expert Syst Appl 92:82–94. https://doi.org/10.1016/j.eswa.2017.09.038

    Article  Google Scholar 

  59. 59.

    Ramya R, Srinivasan K (2020). Real time palm and finger detection for gesture recognition using convolution neural network. In: Human Behaviour Analysis Using Intelligent Systems. Springer, pp. 1–19

  60. 60.

    Riccio D, Dugelay JL (2007) Geometric invariants for 2D/3D face recognition. Pattern Recogn Lett 28:1907–1914. https://doi.org/10.1016/j.patrec.2006.12.017

    Article  Google Scholar 

  61. 61.

    Rzecki K, Pławiak P, Niedźwiecki M, Sośnicki T, Leśkow J, Ciesielski M (2017) Person recognition based on touch screen gestures using computational intelligence methods. Inf Sci (Ny) 415–416:70–84. https://doi.org/10.1016/j.ins.2017.05.041

    Article  Google Scholar 

  62. 62.

    Samaria FS, Harter AC (1994). Parameterisation of a stochastic model for human face identification. In: IEEE Workshop on Applications of Computer Vision - Proceedings. IEEE, pp. 138–142

  63. 63.

    Schmidt A, Kasinski A (2009). The performance of two deformable shape models in the context of the face recognition. In: lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics). Springer, pp 400–409

  64. 64.

    Schroff F, Kalenichenko D, Philbin J (2015). FaceNet: a unified embedding for face recognition and clustering. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. pp. 815–823

  65. 65.

    Shi Y (1994) The number of cycles in a Hamilton graph. Discret Math 133:249–257

    MathSciNet  Article  Google Scholar 

  66. 66.

    Song K, Yan Y, Zhao Y, Liu C (2015) Adjacent evaluation of local binary pattern for texture classification. J Vis Commun Image Represent 33:323–339. https://doi.org/10.1016/j.jvcir.2015.09.016

    Article  Google Scholar 

  67. 67.

    Suykens JAK, Vandewalle J (1999) Least squares support vector machine classifiers. Neural Process Lett 9:293–300. https://doi.org/10.1023/A:1018628609742

    Article  Google Scholar 

  68. 68.

    Tadeusiewicz R (2015) Neural networks as a tool for modeling of biological systems. BIO-ALGORITHMS AND MED-SYSTEMS 11(3):135–144. https://doi.org/10.1515/bams-2015-0021

    Article  Google Scholar 

  69. 69.

    Tan X, Triggs B (2010) Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans Image Process 19:1635–1650. https://doi.org/10.1109/TIP.2010.2042645

    MathSciNet  Article  MATH  Google Scholar 

  70. 70.

    Tang Z, Wu X, Fu B, Chen W, Feng H (2018) Fast face recognition based on fractal theory. Appl Math Comput 321:721–730. https://doi.org/10.1016/j.amc.2017.11.017

    MathSciNet  Article  MATH  Google Scholar 

  71. 71.

    Tuncer T, Dogan S, Pławiak P, Rajendra Acharya U (2019) Automated arrhythmia detection using novel hexadecimal local pattern and multilevel wavelet transform with ECG signals. Knowledge-Based Syst 186:104923. https://doi.org/10.1016/j.knosys.2019.104923

    Article  Google Scholar 

  72. 72.

    Vazquez-Fernandez E, Gonzalez-Jimenez D (2016) Face recognition for authentication on mobile devices. Image Vis Comput 55:31–33. https://doi.org/10.1016/j.imavis.2016.03.018

    Article  Google Scholar 

  73. 73.

    Vishnu Priya R, Vijayakumar V, Tavares JMRS (2020) MQSMER: a mixed quadratic shape model with optimal fuzzy membership functions for emotion recognition. Neural Comput Appl 32:3165–3182. https://doi.org/10.1007/s00521-018-3940-0

    Article  Google Scholar 

  74. 74.

    Vishwakarma VP, Dalal S (2020) A novel non-linear modifier for adaptive illumination normalization for robust face recognition. Multimed Tools Appl 79:11503–11529. https://doi.org/10.1007/s11042-019-08537-6

    Article  Google Scholar 

  75. 75.

    Vu NS, Caplier A (2012) Enhanced patterns of oriented edge magnitudes for face recognition and image matching. IEEE Trans Image Process 21:1352–1365. https://doi.org/10.1109/TIP.2011.2166974

    MathSciNet  Article  MATH  Google Scholar 

  76. 76.

    Vu NS, Dee HM, Caplier A (2012) Face recognition using the POEM descriptor. Pattern Recogn 45:2478–2488. https://doi.org/10.1016/j.patcog.2011.12.021

    Article  Google Scholar 

  77. 77.

    Wang JW, Le NT, Lee JS, Wang CC (2018) Illumination compensation for face recognition using adaptive singular value decomposition in the wavelet domain. Inf Sci (Ny) 435:69–93. https://doi.org/10.1016/j.ins.2017.12.057

    MathSciNet  Article  Google Scholar 

  78. 78.

    Weeks AR (1996). Fundamentals of electronic image processing. SPIE Optical Engineering Press

  79. 79.

    Wen ZK, Zhu WZ, Ouyang-Jie, et al (2010). A robust and discriminative image perceptual hash algorithm. In: Proceedings - 4th International Conference on Genetic and Evolutionary Computing, ICGEC 2010. IEEE, pp 709–712

  80. 80.

    Xu Z, Jiang Y, Wang Y, Zhou Y, Li W, Liao Q (2019) Local polynomial contrast binary patterns for face recognition. Neurocomputing 355:1–12. https://doi.org/10.1016/j.neucom.2018.09.056

    Article  Google Scholar 

  81. 81.

    Yang B, Gu F, Niu X (2006). Block mean value based image perceptual hashing. In: proceedings - 2006 international conference on intelligent information hiding and multimedia signal processing, IIH-MSP 2006. IEEE, pp 167–170

  82. 82.

    Yee SY, Rassem TH, Mohammed MF, Awang S (2020). Face recognition using Laplacian completed local ternary pattern (LapCLTP). In: Lecture Notes in Electrical Engineering. Springer, pp. 315–327

  83. 83.

    Youbi Z, Khider A, Boubchir L, et al (2019). Novel Approach of Face Identification Based on Multi-scale Local Binary Pattern. 2018 Int Conf signal, image, Vis their Appl SIVA 2018 1:11–14. https://doi.org/10.1109/SIVA.2018.8661005

  84. 84.

    Zhou LF, Du YW, Li WS et al (2018) Pose-robust face recognition with Huffman-LBP enhanced by divide-and-rule strategy. Pattern Recogn 78:43–55. https://doi.org/10.1016/j.patcog.2018.01.003

    Article  Google Scholar 

  85. 85.

    Zhou X, Jin K, Xu M, Guo G (2019) Learning deep compact similarity metric for kinship verification from face images. Inf Fusion 48:84–94. https://doi.org/10.1016/j.inffus.2018.07.011

    Article  Google Scholar 

  86. 86.

    Zhou L, Wang H, Lin S, Hao S, Lu ZM (2020) Face recognition based on local binary pattern and improved pairwise-constrained multiple metric learning. Multimed Tools Appl 79:675–691. https://doi.org/10.1007/s11042-019-08157-0

    Article  Google Scholar 

  87. 87.

    Zomorodi-moghadam M, Abdar M, Davarzani Z, Zhou X, Pławiak P, Acharya UR (2019). Hybrid particle swarm optimization for rule discovery in the diagnosis of coronary artery disease Expert Syst e12485. https://doi.org/10.1111/exsy.12485


Author information


Correspondence to Paweł Pławiak.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Tuncer, T., Dogan, S., Abdar, M. et al. A novel facial image recognition method based on perceptual hash using quintet triple binary pattern. Multimed Tools Appl 79, 29573–29593 (2020). https://doi.org/10.1007/s11042-020-09439-8


Keywords

  • Face recognition
  • Quintet triple binary pattern
  • Perceptual hash
  • Machine learning
  • Biometrics