Multimodal Biometric Authentication System Based on Score-Level Fusion of Palmprint and Finger Vein

  • C. Murukesh
  • K. Thanushkodi
  • Padmanabhan Preethi
  • Feroze Naina Mohamed
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 324)

Abstract

Multimodal biometrics plays a major role in meeting the authentication demands of a rapidly growing population. In this paper, palmprint and finger vein images are fused using normalized scores of the individual traits. Palmprint features extracted with the discrete cosine transform (DCT) are classified using multi-class linear discriminant analysis (LDA) and self-organizing maps (SOM). Finger vein identification is designed and developed using the repeated line tracking method to extract the vein patterns. A multimodal biometric authentication system integrates information from multiple biometric sources to compensate for the performance limitations of each individual biometric system. Such systems can significantly improve recognition performance while broadening population coverage, impeding spoof attacks, increasing the degrees of freedom, and reducing failure rates.

Keywords

Multimodal biometrics, DCT, Multi-class LDA, SOM

1 Introduction

In recent years, multimodal biometrics, in which more than one biometric trait is fused, has gained substantial attention from organizations. Vein patterns offer a more secure authentication modality than many other biometrics: they are noninvasive, reliable, and well accepted by users [1]. Preprocessing of finger vein images yields a better quality image by removing noise and increasing the image contrast [2]. An infrared finger vein image acquired under LED illumination contains not only the vein patterns but also irregular shading produced by the varying thicknesses of the finger bones and muscles [3]. The finger vein pattern in such unclear images is extracted using line tracking started from various positions. A person retrieval solution using finger veins can be accomplished by searching an image database in reasonable time [4]. A wide line detector for feature extraction can obtain precise width information of the finger vein and improve the robustness of features extracted from low-quality images [5]. Finger vein patterns extracted using gradient-based thresholding and maximum curvature points have been applied to neural networks to train and test such systems [6, 7]. Extracting the finger vein pattern regardless of vein thickness or brightness is necessary for accurate personal identification [8]. A face and finger vein authentication system with multi-level score fusion is very effective at reducing the false rejection rate [9]. Multiple features, such as texture (Gabor), line, and appearance (PCA) features, extracted from palmprint images have been fused using particle swarm optimization techniques to improve performance [10, 11, 12]. A wavelet-based fusion technique has been suggested to fuse extracted features, using wavelet extensions and a mean–max fusion method to overcome the problem of feature fusion [13].

The rest of this paper is organized as follows: Sect. 2 describes the palmprint recognition system. Section 3 highlights the feature extraction algorithm for finger vein authentication system. Score-level fusion is discussed in Sect. 4. Section 5 provides experimental results of the proposed system, and Sect. 6 offers the conclusion.

2 Palmprint Authentication

This paper presents an efficient, high-speed method for palmprint recognition: an image-based approach that uses the 2-dimensional discrete cosine transform (2D-DCT) for image compression and a combination of multi-class linear discriminant analysis (LDA) and a self-organizing map (SOM) neural network [7] for recognition.

2.1 Image Compression

The image is compressed using a blocked 2D discrete cosine transform (DCT): a mask is applied to each block, and the high-frequency coefficients are discarded.

The 2D-DCT is defined as
$$ X[k_{1},k_{2}] = \alpha[k_{1}]\,\alpha[k_{2}] \sum_{n_{1}=0}^{N_{1}-1} \sum_{n_{2}=0}^{N_{2}-1} x[n_{1},n_{2}] \cos\left(\frac{\pi(2n_{1}+1)k_{1}}{2N_{1}}\right) \cos\left(\frac{\pi(2n_{2}+1)k_{2}}{2N_{2}}\right) $$
(1)
for k1 = 0, 1, …, N1 − 1 and k2 = 0, 1, …, N2 − 1.
The 2D-IDCT is given by
$$ x[n_{1},n_{2}] = \sum_{k_{1}=0}^{N_{1}-1} \sum_{k_{2}=0}^{N_{2}-1} \alpha[k_{1}]\,\alpha[k_{2}]\, X[k_{1},k_{2}] \cos\left(\frac{\pi(2n_{1}+1)k_{1}}{2N_{1}}\right) \cos\left(\frac{\pi(2n_{2}+1)k_{2}}{2N_{2}}\right) $$
(2)

for n1 = 0, 1, …, N1 − 1 and n2 = 0, 1, …, N2 − 1.

Mathematically, the DCT is perfectly reversible, and there is no loss of image definition until the coefficients are quantized. As in JPEG image compression, the input image is first divided into 8 × 8 blocks, and each block is quantized separately by discarding redundant information [14]. The receiver decodes the quantized DCT coefficients of each block and computes the 2D-IDCT of each block. The resulting palmprint image, shown in Fig. 1b, is the compressed image; it is blurred owing to the loss of quality and clearly shows the block structure. The image is reshaped into a single column and fed into the neural network.
Fig. 1

a Palmprint image and b compressed image using 2D-DCT
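The blocked compression described above can be sketched in Python. The helper names (`compress_blocks`, `dct2`, `idct2`) and the 4 × 4 mask on 8 × 8 blocks are illustrative assumptions, not details from the paper:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2D-DCT via separable 1D transforms (Eq. 1)
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    # 2D-IDCT (Eq. 2)
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def compress_blocks(img, keep=4, bsize=8):
    """Blocked DCT compression: keep only the top-left `keep` x `keep`
    low-frequency coefficients of each block and discard the rest."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    mask = np.zeros((bsize, bsize))
    mask[:keep, :keep] = 1.0
    for i in range(0, h, bsize):
        for j in range(0, w, bsize):
            coeffs = dct2(img[i:i+bsize, j:j+bsize]) * mask
            out[i:i+bsize, j:j+bsize] = idct2(coeffs)
    return out
```

With `keep=bsize` the mask passes every coefficient, so the output reproduces the input exactly, illustrating that the DCT itself is lossless until coefficients are discarded.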

2.2 Multi-class LDA

FLDA finds a mapping from the high-dimensional space to a low-dimensional space in which the most discriminant features are preserved [15, 16]. It achieves this by minimizing the variation within each class while maximizing the variation between classes. The between-class scatter matrix is given by
$$ S_{\text{B}} = \sum_{i=1}^{n} \lambda_{i} (\mu_{i} - \mu)(\mu_{i} - \mu)^{\text{T}} $$
(3)
Consider that each pattern in the learning set belongs to one of the n classes \( \left( m_{1}, m_{2}, \ldots, m_{n} \right) \). Given these classes, the within-class scatter matrix is defined by
$$ S_{\text{W}} = \sum_{i=1}^{n} \sum_{X_{k} \in m_{i}} (X_{k} - \mu_{i})(X_{k} - \mu_{i})^{\text{T}} $$
(4)
where \( \mu_{i} \) is the mean of class i, mi is the number of samples in class i, and the superscript T denotes transposition. The objective of FLDA is then to find the \( U_{\text{opt}} \) that maximizes the ratio of the between-class scatter to the within-class scatter:
$$ U_{\text{opt}} = \mathop{\text{argmax}}\limits_{U} \frac{\left| U^{\text{T}} S_{\text{B}} U \right|}{\left| U^{\text{T}} S_{\text{W}} U \right|} $$
(5)
Finding the maximizing \( U_{\text{opt}} \) directly could be difficult, but fortunately the solution is known to be obtained by solving the generalized eigenvalue problem
$$ S_{\text{B}} {\text{U}} - QS_{\text{W}} {\text{U}} = 0 $$
(6)
where Q is a diagonal matrix whose elements are the eigenvalues, and the column vectors of U are the eigenvectors corresponding to those eigenvalues.
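The scatter matrices and the generalized eigenproblem above can be sketched as follows. The class-size weighting λi = ni in Eq. (3), the small ridge added to SW for numerical stability, and the function name `lda_projection` are assumptions for this sketch, not details from the paper:

```python
import numpy as np
from scipy.linalg import eigh

def lda_projection(X, y, n_components):
    """Multi-class LDA: build S_B and S_W (Eqs. 3-4) and solve the
    generalized eigenproblem S_B u = q S_W u (Eq. 6).
    X is (samples, features); y holds integer class labels."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    S_B = np.zeros((d, d))
    S_W = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        # between-class scatter, weighted by class size (Eq. 3)
        S_B += len(Xc) * np.outer(mu_c - mu, mu_c - mu)
        # within-class scatter (Eq. 4)
        S_W += (Xc - mu_c).T @ (Xc - mu_c)
    # generalized eigenvectors; eigh returns eigenvalues in ascending order,
    # so reverse to take the most discriminant directions first
    evals, evecs = eigh(S_B, S_W + 1e-6 * np.eye(d))
    return evecs[:, ::-1][:, :n_components]
```

Projecting the data with the returned matrix (`X @ U`) yields the low-dimensional discriminant features fed to the SOM.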

2.3 Self-Organizing Feature Maps

The principal goal of the SOM is to transform an input signal pattern of arbitrary dimension into a one- or two-dimensional discrete map, and to perform this transformation in a topologically ordered fashion [17]. The output nodes are connected in an array (usually one- or two-dimensional). An input vector x is chosen at random, and the winning output unit i is the one satisfying
$$ \left| {w_{i} - {\text{x}}} \right| \le \left| {w_{k} - {\text{x}}} \right|\forall k $$
(7)
The weights of each node k are then updated according to its neighborhood relation N(j, k) to the winning node j:
$$ w_{k} ({\text{new}}) = w_{k} ({\text{old}}) + \mu N(j,k)(x - w_{k} ) $$
(8)

Thus, units close to the winners as well as the winners themselves have their weights updated appreciably.
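A single update step of a one-dimensional SOM, implementing Eqs. (7) and (8), might look like the sketch below; the Gaussian neighborhood function, the learning rate, and the function name `som_step` are illustrative assumptions:

```python
import numpy as np

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One SOM update: find the winning unit (Eq. 7), then pull the winner
    and its grid neighbours toward x, scaled by a Gaussian neighbourhood
    function playing the role of N(j, k) in Eq. 8."""
    # winner j = unit whose weight vector is closest to x (Eq. 7)
    j = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    # distance of every unit k from the winner on the 1-D output grid
    grid_dist = np.abs(np.arange(len(weights)) - j)
    neighbourhood = np.exp(-grid_dist**2 / (2 * sigma**2))
    # weight update (Eq. 8): winner moves most, neighbours proportionally less
    weights = weights + lr * neighbourhood[:, None] * (x - weights)
    return j, weights
```

Units close to the winner receive a neighborhood value near 1 and therefore move appreciably, while distant units barely change, which is exactly the topological ordering property described above.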

3 Finger Vein Authentication

The vein patterns were extracted by combining two segmentation methods: morphological operations and maximum curvature points in image profiles. The finger vein patterns were acquired by passing near-infrared light through the finger; the result is an image of the unique vein patterns, captured as dark lines by a sensor placed below the finger.

3.1 Feature Extraction and Matching

The finger vein features are extracted by the repeated line tracking method. Let (xc, yc) be the current tracking point, initialized to the start pixel (xs, ys). Rf is the set of pixels within the finger’s outline, and Tr is the locus space. Dlr and Dud are the moving-direction attributes that constrain the tracking point; they are determined by
$$ D_{\text{lr}} = \begin{cases} (1,0) & \text{if } R_{\text{nd}}(2) < 1 \\ (-1,0) & \text{otherwise} \end{cases} $$
(9)
$$ D_{\text{ud}} = \begin{cases} (0,1) & \text{if } R_{\text{nd}}(2) < 1 \\ (0,-1) & \text{otherwise} \end{cases} $$
(10)
where Rnd(n) is a uniform random number between 0 and n.
The detection of the dark-line direction and the movement of the tracking point are determined by the set of pixels Nc:
$$ N_{c} = T_{c} \cap R_{f} \cap N_{r} (x_{c} ,y_{c} ) $$
(11)
Nr (xc, yc) is the set of neighboring pixels of (xc, yc), selected as follows:
$$ N_{r}(x_{c},y_{c}) = \begin{cases} N_{3}(D_{\text{lr}})(x_{c},y_{c}) & \text{if } R_{\text{nd}}(100) < p_{\text{lr}} \\ N_{3}(D_{\text{ud}})(x_{c},y_{c}) & \text{if } p_{\text{lr}} \le R_{\text{nd}}(100) < p_{\text{lr}} + p_{\text{ud}} \\ N_{3}(x_{c},y_{c}) & \text{if } p_{\text{lr}} + p_{\text{ud}} \le R_{\text{nd}}(100) \end{cases} $$
(12)
where N3(D)(x, y) is the set of three neighboring pixels of (x, y) whose direction is determined by the moving-direction attribute D:
$$ N_{3}(D)(x,y) = \left\{ (D_{x}+x,\, D_{y}+y),\; (D_{x}-D_{y}+x,\, D_{y}-D_{x}+y),\; (D_{x}+D_{y}+x,\, D_{y}+D_{x}+y) \right\} $$
(13)
Parameters plr and pud are the probabilities of selecting the three neighboring pixels in the horizontal and vertical directions, respectively. The line evaluation function reflects the depth of the valleys in the cross-sectional profiles around the current tracking point:
$$ V_{i} = \max_{(x_{c},y_{c}) \in N_{c}} \left\{ \begin{aligned} & F\!\left(x_{c} + r\cos\theta_{i} - \tfrac{W}{2}\sin\theta_{i},\; y_{c} + r\sin\theta_{i} + \tfrac{W}{2}\cos\theta_{i}\right) \\ & + F\!\left(x_{c} + r\cos\theta_{i} + \tfrac{W}{2}\sin\theta_{i},\; y_{c} + r\sin\theta_{i} - \tfrac{W}{2}\cos\theta_{i}\right) \\ & - 2F\!\left(x_{c} + r\cos\theta_{i},\; y_{c} + r\sin\theta_{i}\right) \end{aligned} \right\} $$
(14)
Here W is the width of the profiles, r is the distance between (xc, yc) and the cross section, and θi is the angle between the line segments (xc, yc) − (xc + 1, yc) and (xc, yc) − (xi, yi). The current tracking point (xc, yc) is added to the locus position table Tc. The total number of times each pixel (x, y) has served as the current tracking point during the repeated line tracking operation is stored in the locus space Tr(x, y); the finger vein pattern is therefore obtained as chains of high values of Tr(x, y). The vein patterns shown in Fig. 2 are extracted using repeated line tracking; through this iterative process, every minute detail of the finger vein is taken into account.
Fig. 2

Extraction of finger vein patterns
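The randomized neighbor selection of Eqs. (9)–(13) can be sketched as below. The third case of Eq. (12) is ambiguous as printed; this sketch falls back to the full 8-neighborhood, as in the original repeated line tracking method [3]. That fallback, the default probabilities, and the helper names are assumptions:

```python
import random

def n3(D, x, y):
    """Three neighbouring pixels of (x, y) in moving direction D (Eq. 13)."""
    Dx, Dy = D
    return [(Dx + x, Dy + y),
            (Dx - Dy + x, Dy - Dx + y),
            (Dx + Dy + x, Dy + Dx + y)]

def pick_neighbours(x, y, p_lr=50, p_ud=25, rng=None):
    """Candidate set N_r (Eq. 12): horizontal tendency with probability
    p_lr %, vertical with p_ud %, otherwise all eight neighbours."""
    rng = rng or random.Random()
    D_lr = (1, 0) if rng.randrange(2) < 1 else (-1, 0)   # Eq. 9
    D_ud = (0, 1) if rng.randrange(2) < 1 else (0, -1)   # Eq. 10
    r = rng.randrange(100)                               # R_nd(100)
    if r < p_lr:
        return n3(D_lr, x, y)
    if r < p_lr + p_ud:
        return n3(D_ud, x, y)
    # assumed fallback: full 8-neighbourhood of (x, y)
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]
```

For D = (1, 0), `n3` returns the three pixels directly to the right of (x, y), which is the horizontal moving tendency the attribute Dlr encodes.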

4 Score-Level Fusion

The matching scores of the palmprint recognizer are obtained by finding the minimum absolute deviation for each palmprint image. The matching scores of the finger vein recognizer are normalized using the Z-score normalization technique, where the mean (μ) and standard deviation (σ) are estimated from a given set of matching scores.

The normalized scores are given by
$$ S_{K}^{{\prime }} = \frac{{S_{K} - \mu }}{\sigma } $$
(15)
The palmprint and finger vein scores are fused by weighted fusion method.
$$ S = w_{1} s_{1} + w_{2} s_{2} $$
(16)
where s1 and s2 are the palmprint and finger vein matching scores, w1 and w2 are the weights assigned to the two traits, and S is the fused score (Fig. 3).
Fig. 3

The block diagram of the proposed multimodal biometric system
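Equations (15) and (16) translate directly into code; the equal default weights and the function names are illustrative assumptions, since the paper does not state its weight values:

```python
import numpy as np

def z_normalize(scores):
    """Z-score normalisation of a set of matching scores (Eq. 15)."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def fuse_scores(s_palm, s_vein, w1=0.5, w2=0.5):
    """Weighted score-level fusion (Eq. 16), with w1 + w2 = 1 assumed."""
    return w1 * s_palm + w2 * s_vein
```

After normalization the score sets from both modalities share zero mean and unit variance, which is what makes the weighted sum in Eq. (16) meaningful across heterogeneous matchers.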

5 Experimental Results and Discussions

Experiments have been conducted on a homogeneous multimodal database consisting of palmprint and finger vein images acquired from 50 subjects, each providing 5 palmprint and 5 finger vein images. Palmprint images were acquired using a high-resolution digital camera, and finger vein images were obtained using a CCD camera. The ROI at the center of each palmprint image is extracted, and the image is compressed with the 2D-DCT. The coefficients are reshaped and fed to the multi-class LDA. The LDA outputs for the different persons are assembled into a single array and given as input to the neural network. The closest match in the training database for an input palmprint image is found by the SOM neural network using the minimum absolute deviation, as shown in Table 1.
Table 1

Absolute minimum deviation of palmprint images

Subject    Minimum deviation
1          4
2          8
3          13
4          17
5          23

The vein patterns are extracted using repeated line tracking, and matching scores are obtained for fusion. Normalized scores of finger vein images from the different subjects are listed in Table 2. After the scores from both modalities are obtained, the palmprint and finger vein traits are fused using score-level fusion.
Table 2

Normalization scores of finger vein images

Subject    Sample 1    Sample 2    Sample 3    Sample 4    Sample 5
1          74.5689     75.6421     75.2389     75.8999     74.1256
2          82.5671     82.9795     82.1256     82.0145     82.4789
3          86.2356     86.1005     86.0005     86.1856     86.1458
4          73.4566     73.0025     73.1255     73.0189     73.5809
5          90.1289     90.1478     90.1456     90.1236     90.1006

Table 3 shows the false accept rate (FAR) and recognition rate determined for the proposed techniques. The score-level fusion of palmprint and finger vein achieves a recognition rate of 98.5 % at 2 % FAR. The proposed multimodal biometric system overcomes the limitations of the individual biometric systems while meeting the response-time and accuracy requirements.
Table 3

Recognition performance of the proposed system

Traits                     FAR (%)    Recognition rate (%)
Palmprint                  6          94.5
Finger vein                4          96
Palmprint + finger vein    2          98.5

6 Conclusion

A multimodal biometric authentication system based on score-level fusion of palmprint and finger vein has been presented. Palmprint authentication is implemented using 2D-DCT for image compression and multi-class LDA with an SOM neural network for feature classification. Finger vein authentication uses repeated line tracking for feature extraction, with matching performed against the stored template. Once identification is done, the minimum deviation for the palmprint and the matching score for the finger vein are normalized and fused at the score level. The fused system reduces the error rate and provides more accurate results than either modality alone.

References

  1. X. Li, S. Guo, The fourth biometric—vein recognition. Pattern Recogn. Tech. Technol. Appl. 24, 626 (2008)
  2. D. Hejtmankova, R. Dvorak, A new method of finger veins detection. Int. J. Bio-Sci. Bio-Technol. 1, 11 (2009)
  3. N. Miura, A. Nagasaka, T. Miyatake, Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification. Mach. Vis. Appl. 15, 194–203 (2004)
  4. N. Miura, T. Miyatake, A. Nagasaka, Automatic feature extraction from non-uniform finger vein image and its application to personal identification. IAPR Workshop on Machine Vision Applications (2002)
  5. B. Huang, Y. Dai, R. Li, D. Tang, W. Li, Finger-vein authentication based on wide line detector and pattern normalization. International Conference on Pattern Recognition (2010), pp. 1269–1273
  6. I. Malik, R. Sharma, Analysis of different techniques for finger-vein feature extraction. Int. J. Comput. Trends Technol. (IJCTT) 4(5) (2013)
  7. A.N. Hoshyar, R. Sulaiman, A.N. Houshyar, Smart access control with finger vein authentication and neural network. J. Am. Sci. 7(9) (2011)
  8. J.H. Choi, W. Song, T. Kim, S.-R. Lee, H.C. Kim, Finger vein extraction using gradient normalization and principal curvature. Proceedings of SPIE, Image Processing: Machine Vision Applications II (2009), p. 7251
  9. I.R. Muhammad, Multimodal face and finger veins biometric authentication. Sci. Res. Essays 5(17), 2529–2534 (2010)
  10. G. Yang, X. Xi, Y. Yin, Finger vein recognition based on (2D)2 PCA and metric learning. J. Biomed. Biotechnol. 2 (2012)
  11. K. Krishneswari, S. Arumugam, Intramodal feature fusion based on PSO for palmprint authentication. ICTACT J. Image Video Process. 02(04) (2012)
  12. P. Tamil Selvi, N. Radha, Palmprint and iris based authentication and secure key exchange against dictionary attacks. Int. J. Comput. Appl. 2(11) (2010)
  13. K. Krishneswari, S. Arumugam, Intramodal feature fusion using wavelet for palmprint authentication. Int. J. Eng. Sci. Technol. 3 (2011)
  14. D.V. Jadhav, R.S. Holambe, Radon and discrete cosine transform based feature extraction and dimensionality reduction approach for face recognition. Signal Process. 88, 2604–2609 (2008)
  15. T. Connie, A. Jin, M. Ong, D. Ling, An automated palmprint recognition system. Image Vision Comput. 23 (2005)
  16. X. Wu, D. Zhang, Fisher palms based palmprint recognition. Pattern Recogn. Lett. 24(15), 2829–2838 (2003)
  17. A.H.A. Al-Timemy, A robust algorithm for ear recognition system based on self organization maps. 1st Regional Conference of Engineering Science NUCEJ (special issue) 11(2) (2008)

Copyright information

© Springer India 2015

Authors and Affiliations

  • C. Murukesh (1)
  • K. Thanushkodi (2)
  • Padmanabhan Preethi (1)
  • Feroze Naina Mohamed (1)
  1. Department of EIE, Velammal Engineering College, Anna University, Chennai, India
  2. Akshaya College of Engineering and Technology, Anna University, Coimbatore, India