1 Introduction

Biometric systems are widely used in access control and security applications. The goal of a biometric system is to identify or verify a subject of interest using physical and/or behavioral characteristics. Various biometric systems exist that are based on physical and/or behavioral cues such as the face, iris, speech, keystroke dynamics, palmprint, retina, and so on. Among these, the palmprint-based biometric system, which has been investigated for over 15 years, has demonstrated its applicability as a successful biometric modality. Palmprints exhibit unique characteristics that can be described using texture features arising from the palm creases, wrinkles, and ridges. Furthermore, palmprints can be captured using low-cost sensors at a resolution as low as 75 dots per inch (dpi) [1, 2]. In addition, recent work [3] has demonstrated the anti-spoofing capability of palmprints, which establishes the palmprint as a highly reliable biometric characteristic.

The increasing popularity of palmprint biometrics has resulted in various feature extraction techniques that have boosted the accuracy of palmprint verification. The available techniques can be broadly classified into five types, namely: (1) local feature-based approaches, (2) statistical approaches, (3) appearance-based approaches, (4) texture-based approaches, and (5) hybrid approaches. The local feature extraction techniques extract features such as ridges, delta points, minutiae, and palm creases (or principal lines). Local features can be extracted from the palmprint using various techniques, including the line segment approach [4], the morphological median wavelet [5], the Sobel operator [6], the Canny operator [6], the Plessy operator [7], and the wide-line detection operator [8]. Even though local features achieve accurate performance, these methods demand very high-resolution palmprint images and thereby increase the cost of the sensor. The statistical approaches extract features such as the mean, variance, moments, and energy. Various techniques exist to capture the statistics of the palmprint, including the wavelet transform [9], the Fourier transform [10], cepstrum energy [11], sub-block energy based on the Gabor transform [12, 13], micro-scale invariant Gabor features [14], and Zernike moments [15]. However, the statistics-based approaches are not robust against sensor noise. The appearance-based approaches map the data from a high-dimensional to a low-dimensional space to achieve both high accuracy and speed.
The most popular appearance-based techniques include Principal Component Analysis (PCA) [16], 2DPCA [17], bidirectional PCA [18], (2D)2PCA [19], independent component analysis (ICA) [20], linear discriminant analysis (LDA) [21], kernel-based approaches such as kernel discriminant analysis (KDA) [13] and kernel PCA (KPCA) [22], and generative model-based approaches, namely the PCA mixture model (PCAMM) and the ICA mixture model (ICAMM) [23]. Even though appearance-based models can perform on par with the statistical approaches, they still lack robustness against noise as well as against variation of palmprint templates over time. The texture-based schemes normally extract the global patterns of lines, ridges, and wrinkles, which enables robust palmprint recognition. Among the available texture extraction schemes, local binary patterns (LBP) [24], the Gabor transform [13], palmcode [25], ordinal code [26], fusion code [27], competitive code [28], and contour code [29] have been shown to perform accurately even on low-resolution palmprint images. The hybrid schemes [30, 31] combine more than one of the above-mentioned schemes to address the shortcomings of the individual schemes. Compared to the other four types of schemes, the hybrid schemes appear to be more robust and accurate for palmprint recognition. Table 1 shows the characteristics of the existing palmprint feature extraction schemes in terms of computational complexity and accuracy. Detailed surveys on palmprint recognition can be found in [32, 33].

Table 1 Characteristics of palmprint recognition approaches

In this work, we propose a simple and novel approach for palmprint verification based on the sparse representation of features derived from a Bank of Binarized Statistical Image Features (B-BSIF) [34]. BSIF [34] is a texture descriptor similar to LBP, but the difference lies in the way the filters are obtained: the BSIF filters are learned from natural images, whereas the LBP filters are manually predefined. To the best of our knowledge, no work has been reported in the literature that uses Binarized Statistical Image Features (BSIF) for palmprint verification. With this backdrop, in our previous work [35], we made an initial attempt towards the sparse representation of BSIF. In this paper, that work is extended in several directions. In particular, exploiting a bank of 56 different BSIF filters allowed us to reduce the equal error rate (EER) significantly. Overall, the main contributions of this work are:

  • A new method based on the Bank of BSIF (B-BSIF) and sparse representation classifier (SRC) for palmprint recognition.

  • Extensive experiments are carried out on the following three different palmprint databases, namely: PolyU contact palmprint database [36] with 356 subjects, IIT Delhi contactless palmprint database [37] with 236 subjects, and Multispectral palmprint PolyU database [3] with 500 subjects.

  • Comprehensive analysis comparing the proposed scheme with seven state-of-the-art schemes based on LBP [24], palmcode [25], ordinal code [26], fusion code [27], the Gabor transform with KDA [13], the Gabor transform with sparse representation [38], and also with our previously proposed technique based on the sparse representation of BSIF [35].

All in all, the proposed framework, being simple, novel, and the first of its kind in the palmprint verification/recognition literature, is expected to open up a new dimension for further research in the field of palmprint biometrics.

The rest of the paper is structured as follows: Section 2 presents the proposed scheme for robust palmprint recognition, Section 3 discusses the experimental setup, protocols, and results, and Section 4 draws the conclusion.

2 Proposed method

Figure 1 shows the block diagram of the proposed Bank of BSIF and sparse representation classifier (SRC) based scheme for palmprint recognition. The proposed scheme can be structured in two main steps.

Fig. 1
figure 1

Block diagram of the proposed method

2.1 Region of interest extraction

The main idea of region of interest (RoI) extraction is to extract the significant region of the palmprint, which contains a rich set of features such as principal lines, ridges, and wrinkles, while compensating for rotation and translation. Accurate extraction of the RoI plays a crucial role in improving the performance of the overall palmprint recognition. In this work, we have employed the algorithm proposed in [23], which aligns the palmprint by computing its center of mass and locating the valley regions. We carried out this RoI extraction scheme only on the PolyU palmprint database, as the other two databases (MSPolyU and IITD) already provide the RoI images.

2.2 Bank of BSIF features and sparse representation classifier

The idea behind the proposed B-BSIF is to construct a bank of filters that is trained on a set of natural images. Traditionally, one can train BSIF filters in an unsupervised manner using popular techniques such as Restricted Boltzmann Machines (RBMs) [39, 40], auto-encoders [41], sparse coding [42], and independent component analysis (ICA) [43, 44]. Among these, ICA is the most appealing choice, as it avoids the tuning of a large set of hyper-parameters and also provides a statistically independent basis that can in turn be used as filters to extract features from a given image. Thus, given the natural images, we first normalize them to have zero mean and unit variance [34]. Then, we sample \(N_{Im}\) patches to learn the BSIF filters using ICA. The size of the image patches sampled from the natural images fixes the size of the BSIF filter to be learned, and the number of top ICA basis vectors retained determines the length of the BSIF filter. For instance, a BSIF filter of size 5×5 and length 8 corresponds to the top 8 basis vectors of the ICA algorithm learned using image patches of size 5×5 sampled from the natural images. Thus, by varying both size and length, one can learn a set of BSIF filters from the natural images. In this work, we consider 56 different pre-learned filters with varying size and length that constitute the Bank of BSIF (B-BSIF) filters.
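The filter-learning pipeline described above can be sketched as follows. This is an illustrative Python version, not the authors' implementation (the paper uses the pre-learned open-source filters of [34]); the function name, patch count, and the use of a single synthetic image are our own assumptions, and scikit-learn's `FastICA` performs the PCA whitening step internally.

```python
import numpy as np
from sklearn.decomposition import FastICA

def learn_bsif_filters(image, k=5, n_filters=8, n_patches=5000, seed=0):
    """Learn BSIF-style filters from one grayscale image (a sketch;
    the filters in [34] are learned from 13 natural images)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ys = rng.integers(0, h - k + 1, n_patches)
    xs = rng.integers(0, w - k + 1, n_patches)
    # Sample k-by-k patches and subtract each patch's mean
    patches = np.stack([image[y:y + k, x:x + k].ravel()
                        for y, x in zip(ys, xs)])
    patches -= patches.mean(axis=1, keepdims=True)
    # FastICA whitens via PCA internally (dimensionality reduction),
    # then estimates n_filters statistically independent components,
    # which serve as the BSIF filters.
    ica = FastICA(n_components=n_filters, whiten="unit-variance",
                  random_state=seed, max_iter=1000)
    ica.fit(patches)
    return ica.components_.reshape(n_filters, k, k)
```

Varying `k` and `n_filters` in this sketch corresponds to varying the filter size and length discussed below.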

In this work, we have employed the open-source filters [34] that are learned using 50,000 image patches randomly sampled from 13 different natural images [45]. The learning process to construct these statistically independent filters has three main steps: (1) mean subtraction of each patch, (2) dimensionality reduction using principal component analysis (PCA), and (3) estimation of statistically independent filters (or basis vectors) using independent component analysis (ICA). Thus, given the palmprint image \(I_{P}(m,n)\) and a BSIF filter \(W_{i}^{k \times k}\), the filter response is obtained as follows [34]:

$$ r_{i} =\sum\limits_{m,n} I_{P}(m, n) \ast W_{i}^{k \times k}(m,n) $$
(1)

where ∗ denotes the convolution operation, m and n index the pixels of the palmprint image, and \(W_{i}^{k \times k}\), ∀i={1,2,…,L}, denotes the i-th BSIF filter, with L the length of the BSIF filter (the number of filters) and k×k its size. The L filter responses are computed together and binarized to obtain a binary string as follows [34]:

$$ b_{i} = \begin{cases} 1, & \text{if}\ r_{i} > 0 \\ 0, & \text{otherwise} \end{cases} $$
(2)

Finally, the BSIF features are extracted by considering, for each pixel (m,n), the set of binary values obtained from the L linear filters. Mathematically, for a given pixel (m,n) and its corresponding binary responses \(b_{i}(m,n)\), the BSIF-encoded features are obtained as follows:

$$ {BSIF}^{k \times k} (m,n) = \sum_{i = 1}^{L} b_{i}(m,n)\, 2^{\,i-1} $$
(3)

The whole procedure of BSIF extraction is illustrated in Fig. 2 using a BSIF filter \(W_{8}^{17 \times 17}\), which has length 8 and size 17×17. Figure 2a shows the input RoI of the palmprint image. Figure 2b shows the learned BSIF filter of size 17×17 and length 8. Figure 2c shows the results of the individual convolutions of the palmprint image with the BSIF filter, as given in Eq. 1. Figure 2d shows the final BSIF feature map, encoded using Eq. 3, obtained on the palmprint RoI shown in Fig. 2a.
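The three-step encoding of Eqs. 1–3 can be sketched in a few lines (a minimal illustration, assuming a grayscale image and filters stored as k×k arrays; the border-handling mode is a free choice not fixed by the text, and the \(2^{i-1}\) weight of Eq. 3 becomes a left shift by i with 0-based indexing):

```python
import numpy as np
from scipy.signal import convolve2d

def bsif_encode(image, filters):
    """Encode an image with L BSIF filters: convolve (Eq. 1),
    binarize each response at zero (Eq. 2), and pack the L bits
    into one integer code per pixel (Eq. 3)."""
    code = np.zeros(image.shape, dtype=np.int64)
    for i, w in enumerate(filters):                             # i = 0 .. L-1
        r = convolve2d(image, w, mode="same", boundary="wrap")  # Eq. (1)
        b = (r > 0).astype(np.int64)                            # Eq. (2)
        code += b << i                                          # Eq. (3): weight 2^i
    return code
```

For L filters, each pixel of the resulting code image takes an integer value in [0, 2^L − 1].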

Fig. 2
figure 2

Illustration of BSIF extraction. (a) RoI of the palmprint image, (b) BSIF filter of size 17×17 and length 8, (c) BSIF features, (d) final encoded BSIF feature map

To achieve good performance in palmprint recognition using BSIF, we need to consider two important factors, namely the filter size and the filter length. However, a single filter with a fixed length may not capture sufficient information to achieve accurate palmprint recognition. Thus, in this work, we propose to use a bank of filters with varying size and length. The filter size is varied from 5×5 to 17×17 in steps of two, giving filters of seven different sizes. In a similar manner, we vary the length of the filter (i.e., the number of independent components) from 5 to 12 in steps of 1 to get eight different lengths. Thus, our ensemble has 7×8=56 filters, and the response of the palmprint image to each filter is obtained independently. Given a palmprint sample P(m,n), we get 56 independent BSIF-coded images R P ={R P1,R P2,…,R P56}.
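The composition of the bank follows directly from the ranges above; a small sketch enumerating the 56 (size, length) pairs:

```python
# Enumerate the (size, length) pairs that make up the B-BSIF bank:
# seven sizes (5x5 ... 17x17 in steps of two) times eight lengths (5 ... 12 bits).
sizes = list(range(5, 18, 2))    # [5, 7, 9, 11, 13, 15, 17]
lengths = list(range(5, 13))     # [5, 6, 7, 8, 9, 10, 11, 12]
bank = [(k, L) for k in sizes for L in lengths]
print(len(bank))                 # 7 x 8 = 56 filters in total
```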

Figure 3 illustrates various BSIF filters that are included in the proposed B-BSIF scheme. Figure 4 shows the qualitative results on the example palmprint with varying filter size with a fixed bit length of 8. It is interesting to observe here that as the filter size increases the distinctive information about the coarse palm lines also increases. Thus, the use of the various lengths of filters will also result in capturing various information from the palmprint. Furthermore, the variation in wrinkles and ridges among different palmprints can be accurately characterized using a bank of BSIF filter rather than using a single BSIF filter.

Fig. 3
figure 3

Illustration of BSIF filters with various lengths and sizes. a 5 × 5, 8 bit, b 7 × 7, 10 bit, c 17 × 17, 12 bit, learned from natural images

Fig. 4
figure 4

Qualitative results of BSIF with different filter sizes and a fixed length of 8 bits. a Input palmprint image, b 3×3, c 5×5, d 7×7, e 9×9, f 11×11, g 13×13, h 15×15, i 17×17

Given a palmprint sample \(I_{P}(m,n)\), we obtain its response to all BSIF filters in the B-BSIF; we then perform the sparse representation of the obtained responses individually for each filter. Thus, the sparse representation of the features obtained from each filter in the B-BSIF is carried out as follows:

  1.

    Given the reference palmprint samples, we first extract the BSIF features (corresponding to one filter) and construct a training dictionary \(T_{r}\) for all C classes (or subjects) as follows:

    $$ T_{r} = \left[T_{r1}, T_{r2}, \ldots,T_{rC}\right] \in \Re^{N \times \left(n_{u}.C\right)} $$
    (4)

    where \(n_{u}\) denotes the number of reference samples for each class and N indicates the dimension of the BSIF features obtained on the \(n_{u}\) reference samples from the C classes (or subjects).

  2.

    Given a test (or probe) sample, extract its BSIF features \(T_{e}\) (using the same filter as above), which can be represented as a linear combination of the training vectors:

    $$ T_{e} = T_{r}\alpha $$
    (5)

    where

    $$ \alpha = \left[ \alpha_{11}, \ldots, \alpha_{1n_{u}} \,|\, \alpha_{21}, \ldots, \alpha_{2n_{u}} \,|\, \ldots \,|\, \alpha_{C1}, \ldots, \alpha_{{Cn}_{u}}\right] $$
    (6)
  3.

    Solve the \(l_{1}\) minimization problem [46] as follows:

    $$ \hat{\alpha} = \arg \min_{\alpha^{'} \in \Re^{n_{u} \cdot C}} \| \alpha^{'}\|_{1} \quad \text{subject to} \quad T_{e} = T_{r}\alpha^{'} $$
    (7)
  4.

    Calculate the residual for each class c as follows:

    $$ r_{c}(T_{e}) = \|T_{e} - T_{r}\,\delta_{c}(\hat{\alpha})\|_{2} $$
    (8)

    where \(\delta_{c}(\hat{\alpha})\) retains only the coefficients of \(\hat{\alpha}\) associated with class c.
  5.

    Finally, use the residual errors as comparison scores to compute the performance of the overall system.

Finally, we repeat the above-mentioned steps 1–5 for all 56 BSIF filters in the bank and obtain the final comparison score as the minimum of the residual errors obtained over all 56 filters.
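Steps 2–5 above can be sketched as follows. This is an illustrative Python version, not the authors' Matlab implementation: the equality-constrained \(l_1\) problem of Eq. 7 is relaxed here to a Lasso objective for brevity (a basis-pursuit solver would be exact), and the helper name `src_residuals` is our own.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_residuals(T_r, labels, T_e, lasso_alpha=1e-3):
    """Class-wise SRC residuals for one BSIF filter.
    T_r    : N x (n_u * C) dictionary of reference BSIF features
    labels : class id of each dictionary column
    T_e    : length-N probe feature vector
    Note: Eq. 7's equality-constrained l1 minimization is
    approximated with a Lasso objective for simplicity."""
    lasso = Lasso(alpha=lasso_alpha, fit_intercept=False, max_iter=50000)
    lasso.fit(T_r, T_e)
    a_hat = lasso.coef_                                      # sparse coefficients
    residuals = {}
    for c in np.unique(labels):
        delta_c = np.where(labels == c, a_hat, 0.0)          # keep class-c coefficients
        residuals[c] = np.linalg.norm(T_e - T_r @ delta_c)   # Eq. 8
    return residuals

# The final B-BSIF score is then the minimum residual over all 56 filters:
# score = min(min(src_residuals(...).values()) for each filter in the bank)
```

With L2-normalized dictionary columns, the probe is reconstructed best by the columns of its true class, so that class attains the smallest residual.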

3 Experimental results and discussion

This section presents the experimental results of the proposed palmprint recognition scheme. Extensive experiments are carried out on three large-scale publicly available palmprint databases: (1) the PolyU palmprint database [36], (2) the IIT Delhi palmprint database [37], and (3) the Multispectral PolyU palmprint database [3]. All experimental results are presented in terms of the equal error rate (EER), and we also present a statistical validation of the results with a 90 % confidence interval [13]. In the following section, we present the experimental protocol adopted in this work.
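For reference, the EER used throughout the evaluation can be computed from sets of genuine and impostor comparison scores as sketched below (assuming higher scores mean a better match; for residual-based scores such as Eq. 8, the inequalities would be reversed):

```python
import numpy as np

def compute_eer(genuine, impostor):
    """Equal error rate from genuine/impostor score sets.
    Scans candidate thresholds and returns the operating point
    where false accept and false reject rates are closest."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best = None
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors accepted at threshold t
        frr = np.mean(genuine < t)     # genuine attempts rejected at t
        if best is None or abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr)
    return (best[0] + best[1]) / 2.0   # EER as the mean of FAR and FRR
```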

3.1 Assessment protocol

This section describes the evaluation protocol adopted in this work for the three palmprint databases, which is the same as in our previous paper [35].

PolyU palmprint database This database comprises 352 subjects, each with ten samples collected in two different sessions. For our experiments, we use all ten samples from the first session as references and all samples from the second session as probes. The database is available for research purposes at http://www4.comp.polyu.edu.hk/~biometrics/MultispectralPalmprint/MSP.htm.

IIT Delhi palmprint database This database consists of 235 subjects with both left and right palmprint samples. Each subject has five samples captured independently for the left and right palms. To evaluate this database, we consider four samples as references and the remaining sample as the probe. We repeat this selection of reference and probe samples using leave-one-out cross-validation with k = 10, and finally we report the result averaged over all ten runs. The database is available for research purposes at http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Palm.htm.

Multi-Spectral PolyU palmprint database This database consists of 500 subjects whose palmprint samples are captured in two different sessions in four different spectra: blue, green, red, and near-infrared (NIR). Each session has six samples per subject. We select the samples from the first session as references and the samples from the second session as probes. We repeat this procedure for all four spectral bands, and the results are presented independently. The database is available for research purposes at http://www4.comp.polyu.edu.hk/~biometrics/MultispectralPalmprint/MSP.htm.

3.2 Results and discussion

Figure 5 shows the qualitative results of the proposed scheme along with five state-of-the-art schemes employed in this work. It can be observed that the BSIF features appear to capture the palmprint characteristics, in terms of ridges and wrinkles, more accurately. This qualitative result indicates the suitability of the proposed BSIF features for palmprint recognition.

Fig. 5
figure 5

Illustration of a palmprint sample, b BSIF (17 × 17, 8 bit), c LBP, d palmcode, e fusion code, f Log-Gabor (LG) transform

Table 2 shows the performance of the proposed scheme based on B-BSIF and SRC on the PolyU palmprint database. It can be observed that the proposed scheme shows the best performance with an EER of 4.06 %, an improvement of over 2 % compared to our previous scheme [35] based on single BSIF features. This further justifies the applicability of the proposed scheme for palmprint recognition.

Table 2 Performance of the proposed method on PolyU palmprint database

Table 3 tabulates the quantitative performance of the proposed scheme on the IITD contactless palmprint database. Here, we present the results individually for the left and right palmprint samples. As can be seen from Table 3, the proposed scheme shows outstanding performance, with an EER of 0.12 % on the left palmprint samples and 0.72 % on the right palmprint samples. This indicates a performance improvement of over 1 % compared to our previous scheme [35] based on single BSIF features, and it further justifies the applicability of the proposed scheme on a database in which the palmprint samples are captured in a contactless fashion.

Table 3 Performance of the proposed method on IIT Delhi palmprint database

Table 4 shows the performance of the proposed scheme on the multispectral PolyU palmprint database. Here also, the proposed scheme achieves outstanding performance with an EER of 0 %. These results further justify the applicability of the proposed scheme to palmprint samples captured in different spectral bands.

Table 4 Performance of the proposed method on MS PolyU palmprint database

Thus, from the above experiments, it can be observed that the proposed scheme shows the best performance when compared with the well-established state-of-the-art schemes for palmprint recognition. Further, the performance achieved on three different databases justifies the robustness and applicability of the proposed scheme for palmprint recognition.

Table 5 shows the computation times of the various algorithms used in this work. All algorithms were implemented in Matlab on a PC with an Intel i7 processor, 8 GB of RAM, and Windows 7. Note that the implementations are not optimized for speed; hence, the computation times provided in Table 5 are indicative only.

Table 5 Computation time of different algorithm used in this work

4 Conclusions

Accurate feature representation plays a vital role in improving the accuracy and reliability of palmprint recognition. In this paper, we have introduced a novel approach for palmprint recognition based on B-BSIF and SRC. The main idea of the proposed method is to use multiple BSIF filters of various sizes and lengths to constitute an ensemble (or bank) of BSIF filters. Since each of these BSIF filters is learned from natural images using independent component analysis (ICA), they exhibit the property of statistical independence. We proposed to build the B-BSIF with 56 different BSIF filters. Each of these filters is then associated with an SRC that performs the sparse representation of the corresponding BSIF features. Thus, given a palmprint sample, we obtain its response to each BSIF filter and then obtain the corresponding comparison score using the SRC. Finally, we select the best comparison score, which corresponds to the minimum residual error. The proposed method is validated through extensive experiments on three large-scale publicly available databases, on which it shows outstanding performance. The performance of the proposed scheme is compared with seven well-established state-of-the-art schemes. The obtained results show that the proposed scheme is an efficient and robust tool for accurate palmprint recognition.