
1 Introduction

The development and popularity of the Internet and computers, particularly electronic commerce, have rendered biometric-based automated human identification important and indispensable [1]. Vein recognition is an automated human identification technology based on vein patterns, that is, the vast network of blood vessels under the skin of the human hand [2–5]. Compared with other biometric technologies, such as those based on fingerprints [6, 7], palmprints [8–11], and the iris [12], palmvein recognition [5, 29, 30] offers unique and abundant identity information, live-body identification, and resistance to counterfeiting. These advantages make palmvein recognition a promising and effective technology with high accuracy and a wide application range.

Palmvein recognition can be divided into the following steps: palmvein image acquisition from a capture device, pre-processing of the region of interest (ROI) and image enhancement, feature extraction, and matching. Among these steps, palmvein image acquisition is the prerequisite. For example, an invalid palm range, poor image quality, and inconsistency of the acquired palmvein images increase the complexity of the ROI extraction and feature extraction algorithms and can even cause palmvein recognition to fail. Therefore, the quality [13, 14], validity, and consistency of the acquired palmvein image play an important role in palmvein recognition.

Existing literature on palmvein recognition mainly focuses on recognition algorithms, such as ROI extraction, image enhancement, feature extraction, and matching [15–30, 41]. Only a few studies have focused on the palmvein image-capture device to improve the quality of the palmvein image; this literature can be summarized into three research topics, namely, obtaining a palmvein image, designing the structure of the palmvein image-capture device, and choosing the near-infrared light source and image sensor [31–35]. All palmvein image-capture devices share a similar overall structure, including a near-infrared light source, a camera, and circuit modules.

The capture mode of palmvein capture devices can be divided into two types, namely, contact and contact-less. The contact mode usually constrains the hand of the user with fixed pegs or a carrier frame. For example, the capture procedure of the contact device in [11] is as follows: a user is asked to place his/her hand on a platform, and several pegs serve as control points for the placement of the hand. The device in [17] places the hand on a support approximately 20 cm from the camera lens; this setup helps users position their hands and limits translations and rotations. However, the contact mode can only ensure that the hand is at the focal distance of the camera; it cannot ensure that the hand stays flat and does not shift during image acquisition. Therefore, the validity and quality of the acquired palmvein image cannot be guaranteed. In the contact-less mode, no guiding peripheral constrains the hand of the user, and the placement is mostly adjusted manually. In the contact-less device in [19], the user needs to monitor whether the hand is properly displayed inside the working volume. In [26], the user is asked to place his/her hand approximately 20 cm above the camera and to decide whether the palmvein image on the monitor is clear enough to acquire a valid image. The contact-less mode may take several seconds to several minutes to adjust the placement of the hand, and it cannot ensure the consistency of the palmvein images across multiple acquisitions because of the different positions and postures of the hand. Two aspects are therefore crucial: (1) how to automatically detect whether the placement of the hand is appropriate, and (2) when to shoot the palmvein image. However, no study has addressed these issues. Palmvein image-capture devices remain at the laboratory stage, are large, and have not become commercially available products, except those of Fujitsu Co. [36].

As mentioned, several problems regarding the capture device of the palmvein identification system should be studied, such as the validity and consistency of the acquired palmvein images, as well as the fast and automatic acquisition of palmvein images. We elucidate these issues as follows:

(1) Validity of the acquired palmvein images. A valid palmvein image has good quality and covers the entire palm, including all five finger root points. When selecting the ROI from the palm center, we need to extract the palm profile and find two critical points (the root between the index and middle fingers and the root between the ring and pinky fingers) [23, 26]. To ensure the validity of the acquired palmvein images, the position and posture of the hand above the camera must be appropriate, that is, (a) the hand is centered in the field of view of the camera; (b) the hand is posed as flat as possible, without tilting; (c) the hand is placed at the focal distance of the camera. If the hand is not centered, that is, it shifts excessively to the left, right, forward, or backward, then the acquired palmvein image is incomplete; consequently, the ROI cannot be extracted and the palm cannot be identified. If the hand is not flat, that is, it tilts leftward, rightward, forward, or backward by a certain degree or the palm center is excessively concave or convex (see Fig. 12), then the acquired palmvein image is significantly deformed. This deformation increases the complexity of the subsequent algorithms and can even cause recognition to fail. If the hand is placed nearer or farther than the focal distance of the camera, then the acquired palmvein image is scaled up or down (see Fig. 12). This scaling increases distortion and reduces image quality, which significantly affects the recognition results. Therefore, the position and posture of the hand above the camera directly determine the quality of the acquired palmvein image, which in turn affects the accuracy of palmvein recognition.

(2) Consistency of the acquired palmvein images. Consistency means that the acquired palmvein images have the same image quality and cover the same palm area when a palm is captured in the registration and identification processes at different times. The position and posture of the hand above the camera must be the same at different capturing instants to ensure the consistency of palmvein imaging. If the same palm is placed inconsistently in the registration and recognition phases, then differences arise in image quality and in the ROI, which increases intra-class variance. Although some algorithms can tolerate small deformations through shift- and rotation-invariant features, they cannot handle large deformations well. Inconsistent images increase the false rejection rate (FRR) and the false acceptance rate (FAR), which decreases recognition accuracy.

(3) Fast and automatic acquisition. The acquisition time of a palmvein image is defined as the interval from the moment the hand of the user is placed above the camera to the moment a valid palmvein image is captured after the position and posture of the hand have been adjusted appropriately. In a practical palmvein identification system, the acquisition must be completed within 3 s; otherwise, it is not acceptable. If the process is controlled manually, several seconds to several minutes are needed to adjust the placement of the hand until it is appropriate, depending on the experience of the user with the device. Moreover, such a device cannot become a commercially available product.

To develop a real-time palmvein identification system with a fast and automatic capture device and to guarantee the validity and consistency of palmvein images, a suitable method for locating the hand is indispensable for palmvein image acquisition. Three key issues are considered in this study. First, how can we detect whether the position and posture of the hand above the camera are appropriate, and when should the capture device automatically capture a valid palmvein image? Second, in practical applications, the interval between registration and identification may be a few months or even a few years; how can we guarantee the same position and posture of the hand above the camera in both processes? Third, how can we quickly guide the user to place the hand in an appropriate position and posture and capture palmvein images automatically only when the placement is appropriate?

To solve the aforementioned problems, this study proposes a real-time locating method that guides a user to place his/her hand appropriately and quickly above the sensor and captures a palmvein image automatically when the placement is appropriate. The framework of the proposed method is illustrated in Fig. 1. First, four light-emitting diodes (LEDs) emit near-infrared light to the palm under the control of an ARM controller. The near-infrared light reflected by the palm is captured by a complementary metal–oxide–semiconductor (CMOS) sensor, so the resulting image contains four light-spots. Second, the edges of the four light-spots are found on the basis of the Freeman Chain Code of Eight Directions (FCCE) after binarization, and the center coordinates of the four light-spots are obtained. Third, the side lengths and angles of the quadrangle formed by the four centers of the light-spots are calculated, and the validity of the placement is determined from these lengths and angles. Finally, the ARM controller is instructed to capture the palmvein image automatically when the lengths and angles are within preset ranges; otherwise, the user is prompted to adjust the position and posture of his/her palm.

Fig. 1.

Framework of the proposed method

The main contributions of this study are as follows:

(1) A simple and practical real-time locating method is proposed for the capture device of the palmvein identification system to realize palmvein image acquisition automatically and quickly as well as to guarantee the validity and consistency of the acquired palmvein images.

(2) A simple electric and optical circuit is designed and implemented to capture an image with four light-spots. This image is used as the input of the subsequent locating algorithm.

(3) A locating algorithm is constructed to effectively detect whether the hand is placed in an appropriate position and posture and to capture the palmvein image when appropriate.

The rest of this paper is organized as follows. Section 2 introduces the four-point locating electric and optical circuit based on a self-fabricated palmvein image capture device. Section 3 presents the proposed locating algorithm. Section 4 shows the experimental results. Section 5 summarizes this paper.

2 Four-Point Locating Electric and Optical Circuit

2.1 Palmvein Image Capture Device

The structural diagram and appearance of the palmvein image-capture device are shown in Figs. 2 and 3, respectively. The capture device consists of a light source module, a camera module, an ARM controller, and a digital signal processor (DSP). The light source module includes the main and subsidiary light sources. The main light source is made up of numerous near-infrared LEDs, which are circularly distributed in the middle of the light source board. When lighted, this light source cooperates with the camera module to capture the palmvein image. The subsidiary light source consists of four 750 nm near-infrared LEDs located at the corners of the light source board. This light source coordinates with the camera to capture an image with four light-spots, which is used to determine whether the hand is placed appropriately. The camera module uses a CMOS image sensor and is responsible for capturing a palmvein image under the main light source and a four-light-spot image under the subsidiary light source. The ARM controller adjusts the LED intensity and triggers the camera to complete real-time acquisition. The DSP is responsible for the locating algorithm, ROI extraction, image enhancement, and feature extraction. In this study, we mainly focus on the four-point locating electric and optical circuit based on the subsidiary light source of four 750 nm near-infrared LEDs.

Fig. 2.

Structural diagram of the capture device

Fig. 3.

Appearance of the capture device

2.2 Four-Point Locating Electric and Optical Circuit

On the basis of our palmvein capture device, four 750 nm near-infrared LEDs are placed at the corners of the light source board. The corresponding four-point locating electric circuit is shown in Fig. 4; the ARM controller switches these LEDs and controls their intensity. The four-point optical circuit is shown in Fig. 5 and consists of the four LEDs, light pipes, and condensing lenses. The near-infrared light emitted by the LEDs is converged and amplified by the condensing lenses through the light pipes to form four light-spots on the palm. Under the control of the ARM controller, a CMOS image sensor captures the light reflected from the palm to generate an image, as shown in Fig. 7(a).

Fig. 4.

Four-point locating electric circuit

Fig. 5.

Four-point locating optical circuit

3 Four-Point Locating Algorithm

3.1 Flowchart of the Four-Point Locating Algorithm

When the image with four light-spots is acquired, it can be processed by the four-point locating algorithm, as shown in Fig. 6.

Fig. 6.

Flowchart of the four-point locating algorithm

3.2 Four-Point Locating Algorithm

(1) Binarization

The original image with four light-spots is shown in Fig. 7(a). After binarization, the original image is transformed into a binary image, as shown in Fig. 7(b). The binarization process adopts an adaptive threshold: the threshold is set to half of the maximum pixel value of the original BMP image.
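As an illustration of this step, the following minimal C sketch applies the half-of-maximum threshold described above; the function and buffer names are ours and are not taken from the device firmware.

```c
#include <stdint.h>
#include <stddef.h>

/* Binarize an 8-bit gray-level image (e.g., the 640 x 480 light-spot image):
 * the adaptive threshold is half of the maximum pixel value; pixels above the
 * threshold become 255 (foreground), all others become 0. */
static void binarize_half_max(const uint8_t *src, uint8_t *dst, size_t n)
{
    uint8_t max_val = 0;
    for (size_t i = 0; i < n; ++i)
        if (src[i] > max_val)
            max_val = src[i];

    uint8_t thr = (uint8_t)(max_val / 2);     /* adaptive threshold */
    for (size_t i = 0; i < n; ++i)
        dst[i] = (src[i] > thr) ? 255 : 0;
}
```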

Fig. 7.

Binarization of the image with four light-spots

(2) Searching the edge points using FCCE

FCCE.

The Freeman chain code describes a boundary or curve by the coordinates of its starting point and a sequence of direction codes; it is a compact coded representation of the boundary. Because it encodes only the boundary directions, it is typically used to simplify the description of boundary points.

The Freeman chain code can be divided into the Freeman Chain Code of Four Directions (FCCF) and FCCE, depending on the number of neighbors considered around the center pixel. FCCF considers four adjacent points in the up, down, left, and right directions. FCCE adds the four diagonal neighbors, which matches the fact that each pixel has eight adjacent points in its neighborhood and can thus precisely describe the relationship between the center point and its neighborhood. Therefore, this study adopts FCCE to search for the edge points (Fig. 8).
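For concreteness, one common way to encode the eight FCCE directions as neighbor offsets is sketched below in C; the paper does not state which numbering convention the authors use, so the assignment here (code 0 = right, codes increasing counter-clockwise, with y growing downward) is only an assumption.

```c
/* Eight FCCE directions as (dx, dy) offsets of the eight neighbours of the
 * centre pixel. Image coordinates: x grows to the right, y grows downward,
 * so code 2 ("up") is (0, -1). The numbering is an assumed convention. */
static const int FCCE_DX[8] = { 1, 1,  0, -1, -1, -1, 0, 1 };
static const int FCCE_DY[8] = { 0, -1, -1, -1,  0,  1, 1, 1 };
```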

Fig. 8.

FCCF and FCCE

Searching the edge points.

We search for the edge points of the four light-spots in the binarized image. First, the image is split into four quadrants, as shown in Fig. 9(a). Starting from the origin, we perform a traversal search for edge points toward the upper-left, upper-right, lower-left, and lower-right quadrants using FCCE. Each light-spot can be viewed as a closed contour: when a newly found edge point coincides with the first edge point, all edge points of that spot have been found and the search in the corresponding quadrant ends. After the four quadrants are processed, the edges of all four light-spots are obtained. We can then draw four circles from the chain codes of the traced edges, as shown in Fig. 9(b).
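The following C sketch illustrates this per-quadrant search under the assumptions stated above: a 640 × 480 binary image with one closed light-spot per quadrant. It repeats the FCCE offset tables from the previous sketch, follows the spot boundary until the first edge point is reached again, and also returns the spot center as the mean of the traced edge points. The function names, the initial search direction, and the stopping rule are illustrative choices, not the authors' exact implementation.

```c
#include <stdint.h>

#define IMG_W    640
#define IMG_H    480
#define MAX_EDGE 4096    /* capacity of the caller-supplied edge buffers */

/* FCCE neighbour offsets, same assumed convention as in the previous sketch. */
static const int DX[8] = { 1, 1,  0, -1, -1, -1, 0, 1 };
static const int DY[8] = { 0, -1, -1, -1,  0,  1, 1, 1 };

static int is_fg(const uint8_t *bin, int x, int y)
{
    return (x >= 0 && x < IMG_W && y >= 0 && y < IMG_H) &&
           bin[y * IMG_W + x] != 0;
}

/* Trace the edge of the single light-spot inside the quadrant
 * [x0, x1) x [y0, y1) of the binary image; store the edge points in
 * (ex, ey) and the spot centre in (*cx_out, *cy_out). Returns the number of
 * edge points found, or 0 if the quadrant contains no foreground pixel. */
static int trace_spot(const uint8_t *bin, int x0, int y0, int x1, int y1,
                      int *ex, int *ey, double *cx_out, double *cy_out)
{
    int sx = -1, sy = -1;
    for (int y = y0; y < y1 && sy < 0; ++y)      /* first (starting) edge point */
        for (int x = x0; x < x1; ++x)
            if (is_fg(bin, x, y)) { sx = x; sy = y; break; }
    if (sy < 0)
        return 0;

    int n = 0, cx = sx, cy = sy, dir = 6;        /* start searching downwards   */
    long sum_x = 0, sum_y = 0;
    do {
        ex[n] = cx; ey[n] = cy;
        sum_x += cx; sum_y += cy; ++n;
        int d;
        for (d = 0; d < 8; ++d) {                /* sweep the eight neighbours   */
            int k = (dir + 6 + d) % 8;
            int nx = cx + DX[k], ny = cy + DY[k];
            if (is_fg(bin, nx, ny)) { cx = nx; cy = ny; dir = k; break; }
        }
        if (d == 8)                              /* isolated single-pixel spot   */
            break;
    } while ((cx != sx || cy != sy) && n < MAX_EDGE);

    *cx_out = (double)sum_x / n;                 /* spot centre (mean of edge)   */
    *cy_out = (double)sum_y / n;
    return n;
}
```

Calling trace_spot once per quadrant (for example, on [0, 320) × [0, 240) for the upper-left quadrant) yields the four center points A, B, C, and D used in the next step.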

Fig. 9.

Searching for the edge points using FCCE

(3) Calculating the lengths and angles

On the basis of the four circles, we can calculate the coordinates of their center points. A quadrangle can then be obtained using the center points of the four circles. Figure 10 shows the quadrangle that consists of four center points. The calculation procedure of the length and angle of the quadrangle is elucidated as follows:

Fig. 10.

Quadrangle consisting of four center points

We suppose that A, B, C, and D are the center points of the four circles, and (x1,y1), (x2,y2), (x3,y3), and (x4,y4) are their coordinates. LAB, LCD, LAC, and LBD are the distances between pairs of these points, that is, the four side lengths of the quadrangle. \( \alpha_{1} \), \( \alpha_{2} \), \( \alpha_{3} \), and \( \alpha_{4} \) are the angles between adjacent sides. The side length LAB can be calculated as the Euclidean distance between points A and B, as shown in Eq. (1). The lengths LCD, LAC, and LBD can be calculated similarly.

$$ L_{AB} = \sqrt {(x_{1} - x_{2} )^{2} + (y_{1} - y_{2} )^{2} } $$
(1)

kAB, kCD, kAC, and kBD represent the slopes of LAB, LCD, LAC, and LBD, respectively. kAB and kAC can be calculated using Eqs. (2) and (3). kCD and kBD can be calculated as well.

$$ k_{AB} = \frac{{y_{2} - y_{1} }}{{x_{2} - x_{1} }} $$
(2)
$$ k_{AC} = \frac{{y_{3} - y_{1} }}{{x_{3} - x_{1} }} $$
(3)

The angle \( \alpha_{1} \) between the lines AB and AC can be obtained using Eq. (4) (when \( \alpha_{1} \) does not equal 90°). \( \alpha_{2} \), \( \alpha_{3} \), and \( \alpha_{4} \) can also be calculated.

$$ \alpha_{1} = \arctan \frac{k_{AB} - k_{AC}}{1 + k_{AB} k_{AC}} $$
(4)
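A small C sketch of Eqs. (1)–(4) follows. It computes the side length with the Euclidean distance of Eq. (1) and the corner angle with atan2 instead of the slope form of Eqs. (2)–(4), a deliberate deviation so that vertical sides (whose slope is undefined) are also handled; the names are illustrative.

```c
#include <math.h>

typedef struct { double x, y; } Point;

/* Eq. (1): Euclidean distance between two spot centres. */
static double side_len(Point p, Point q)
{
    return hypot(p.x - q.x, p.y - q.y);
}

/* Angle (in degrees) at corner a between sides a-b and a-c, i.e. the quantity
 * of Eq. (4). atan2 is used instead of the slope form of Eqs. (2)-(3) so that
 * vertical sides (undefined slope) are handled as well. */
static double corner_angle_deg(Point a, Point b, Point c)
{
    double ang = atan2(c.y - a.y, c.x - a.x) - atan2(b.y - a.y, b.x - a.x);
    ang = fabs(ang) * 180.0 / 3.14159265358979323846;
    return (ang > 180.0) ? 360.0 - ang : ang;
}
```

For the quadrangle ABCD, LAB = side_len(A, B) and \( \alpha_{1} \) = corner_angle_deg(A, B, C), and the remaining lengths and angles are obtained in the same way.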

(4) Judging criteria for detecting hand placement

When the four lengths and four angles of the quadrangle have been calculated, whether the position and posture of the hand are appropriate can be determined according to Eqs. (5)–(9). The detection flowchart is shown in Fig. 11.

Fig. 11.

Detection flowchart of determining hand placement

$$ L_{CD} - C_{1} \le L_{AB} \le L_{CD} + C_{1} $$
(5)
$$ L_{BD} - C_{2} \le L_{AC} \le L_{BD} + C_{2} $$
(6)
$$ \min \left( L_{AB}, L_{CD}, L_{AC}, L_{BD} \right) > C_{3} $$
(7)
$$ \max \left( L_{AB}, L_{CD}, L_{AC}, L_{BD} \right) < C_{4} $$
(8)
$$ 90^\circ - \sigma \le \alpha_{i} \le 90^\circ + \sigma \quad (i = 1, 2, 3, 4) $$
(9)

where C1, C2, C3, and C4 are constants in pixels, and \( \sigma \) is a constant in degrees.

First, we determine whether the palm inclines forward or backward according to Eq. (5). When LCD − C1 > LAB, LAB is short and the front of the palm is far from the camera, which indicates that the palm inclines forward. When LCD + C1 < LAB, LAB is long and the rear of the palm is far from the camera, which indicates that the palm inclines backward. Next, we determine whether the palm inclines left or right according to Eq. (6): when LBD − C2 > LAC, the palm inclines to the left, and when LBD + C2 < LAC, the palm inclines to the right. When the quadrangle satisfies Eqs. (5) and (6), the palm is posed horizontally. However, a flat palm alone is not enough; the distance between the palm and the camera must also be appropriate, which is checked using Eqs. (7) and (8). If Eq. (7) is violated, that is, the shortest side of the quadrangle falls below the threshold C3, the imaged quadrangle is too small and the palm is too far from the camera. If Eq. (8) is violated, that is, the longest side exceeds the threshold C4, the quadrangle is too large and the palm is too close to the camera. Finally, the differences between the four angles of the quadrangle and 90° are examined according to Eq. (9) to determine whether the palm is convex or concave.
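A compact C sketch of this decision rule, corresponding to Eqs. (5)–(9), is given below. The thresholds C1–C4 and σ are device-dependent constants whose actual values the paper does not report, so the numbers used here are placeholders only.

```c
#include <math.h>

/* Placeholder thresholds: the actual values are device-dependent and are not
 * reported in the paper. C1-C4 are in pixels; SIGMA is in degrees. */
#define C1      8.0     /* tolerance of Eq. (5)                  */
#define C2      8.0     /* tolerance of Eq. (6)                  */
#define C3    120.0     /* lower bound on side length, Eq. (7)   */
#define C4    220.0     /* upper bound on side length, Eq. (8)   */
#define SIGMA   6.0     /* angle tolerance of Eq. (9)            */

/* Returns 1 when the quadrangle of spot centres indicates an appropriate hand
 * placement (all of Eqs. (5)-(9) hold), and 0 otherwise. */
static int placement_ok(double lab, double lcd, double lac, double lbd,
                        const double alpha[4])
{
    if (fabs(lab - lcd) > C1) return 0;               /* Eq. (5): fore/back tilt  */
    if (fabs(lac - lbd) > C2) return 0;               /* Eq. (6): left/right tilt */

    double lo = fmin(fmin(lab, lcd), fmin(lac, lbd));
    double hi = fmax(fmax(lab, lcd), fmax(lac, lbd));
    if (lo <= C3 || hi >= C4) return 0;               /* Eqs. (7)-(8): distance   */

    for (int i = 0; i < 4; ++i)                       /* Eq. (9): convex/concave  */
        if (fabs(alpha[i] - 90.0) > SIGMA) return 0;

    return 1;
}
```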

Figure 12 shows the images with four light-spots and the corresponding palmvein images for the appropriate, forward, backward, left, right, convex, concave, far, and near placements. These images are 640 × 480, 8-bit gray-level BMP files.

Fig. 12.

Images with four light-spots and the corresponding palmvein images

4 Experiments and Discussion

Three experiments are conducted to evaluate the validity, consistency, and acquisition speed of the palmvein images acquired using the proposed method, on the basis of recognition accuracy, matching score, and acquisition time.

In this study, the proposed locating algorithm, the ROI extraction and enhancement of the palmvein image, and the feature extraction are implemented in standard C on the ARM (500 MHz) and the DSP (240 MHz). The matching and display programs are developed using Visual Studio 2012 on a standard PC equipped with a 3.2 GHz Core i5-3470 CPU and 4 GB RAM.

4.1 Palmvein Image Capture Steps

The palmvein image acquisition process based on the proposed method is described as follows. First, whether the hand of the user is above the camera is determined using four 750 nm near-infrared LEDs. Second, whether the position and posture of the hand are appropriate is detected using the proposed locating algorithm when the hand is placed above the camera. Third, the main illumination LEDs emit near-infrared light to the palm only when the hand is posed appropriately, and the palmvein image is captured by the camera. Fourth, the palmvein image is saved and transmitted for the subsequent identification process.
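To show how the locating algorithm fits into this acquisition flow, a high-level C sketch of the control loop follows. The hardware hooks (subsidiary_leds_on, main_leds_on, capture_frame, and so on) and locate_hand are hypothetical stand-ins; the paper does not describe the firmware interface, and the stubs below only mark where the real calls would go.

```c
#include <stdint.h>

#define IMG_W 640
#define IMG_H 480

/* Hypothetical firmware hooks: the device API is not described in the paper,
 * so these stubs only mark where the real hardware calls would go. */
static void subsidiary_leds_on(void)    { /* light the four 750 nm LEDs   */ }
static void subsidiary_leds_off(void)   { /* switch them off              */ }
static void main_leds_on(void)          { /* light the main light source  */ }
static void main_leds_off(void)         { /* switch it off                */ }
static void capture_frame(uint8_t *buf) { (void)buf; /* CMOS frame grab   */ }

/* Stand-in for the locating algorithm of Sect. 3: returns 1 when the four
 * light-spots indicate an appropriate hand placement. */
static int locate_hand(const uint8_t *frame) { (void)frame; return 0; }

/* One acquisition cycle following the steps above: light the subsidiary LEDs,
 * check the placement on each frame, and capture the palmvein image only when
 * the placement is appropriate. Returns 1 on success, 0 if no valid placement
 * is found within max_tries frames. */
static int acquire_palmvein(uint8_t *palmvein, int max_tries)
{
    static uint8_t spots[IMG_W * IMG_H];   /* four-light-spot image buffer */

    subsidiary_leds_on();
    for (int i = 0; i < max_tries; ++i) {
        capture_frame(spots);
        if (locate_hand(spots)) {          /* placement appropriate        */
            subsidiary_leds_off();
            main_leds_on();
            capture_frame(palmvein);       /* capture the vein image       */
            main_leds_off();
            return 1;
        }
        /* Otherwise, prompt the user to adjust the hand and try again.    */
    }
    subsidiary_leds_off();
    return 0;
}
```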

By contrast, the steps without the proposed locating method are as follows. First, the main illumination LEDs constantly emit near-infrared light to the palm, and the camera continually takes shots in an attempt to acquire a valid palmvein image. Second, the ROI process is performed on each palmvein image; the capture process is repeated if ROI extraction fails. Third, if the ROI is successfully extracted, then the capture process ends. Fourth, the palmvein image is saved and transmitted for the subsequent identification process.

4.2 Evaluation of Validity by Recognition Rate

The capture device based on the proposed method can acquire a palmvein image in either contact or contact-less mode. The capture modes are shown in Fig. 13. In the contact mode (see Fig. 13(a)), the hand is supported by a bracket. A total of 1,224 images are acquired from 51 volunteers to form a self-built database. The images in this database are captured in two sessions (six images in each session), with an average interval of 30 days between the sessions.

Fig. 13.

Capture modes of palmvein image in this study

To demonstrate the validity of palmvein images in the self-built database, comparison experiments by recognition accuracy are performed between the self-built database and two open databases. One database is the PolyU Multispectral Palmprint Database (PolyU database) [37] in which all 6,000 images are acquired from 250 volunteers using a constrained device with finger-pegs in two sessions (six images in each session), with an average interval of nine days between the sessions. The other database is the CASIA Multi-Spectral Palmprint Database [38] in which all 1,200 images are acquired from 100 volunteers using a contact-less device in two sessions (three images in each session), with an average interval of one month between the two sessions.

Matching samples from the same session tends to achieve better results than matching across sessions because of small variations, which leads to an unreliable performance estimate. Therefore, the samples from the first session serve as the database samples, and the remaining images serve as the test samples. First, the ROI is extracted from the input palmvein image and enhanced [39, 40], yielding a 128 × 128 enhanced palmvein ROI image. The ROI is then identified using the method in [39]. The identification results are shown in Table 1. The recognition rate on the PolyU database is 99.83 % with an equal error rate (EER) of 0.14 %; the recognition rate on the CASIA database is 99.50 % with an EER of 0.67 %; and the recognition rate on the self-built database is 99.91 % with an EER of 0.09 %.

Table 1. The rank-1 identification rate and EER on three databases

Table 1 indicates that the self-built database achieves a high recognition rate and a low EER. Thus, the palmvein images in the self-built database are sufficiently valid for the identification process.

4.3 Evaluation of Consistency by Matching Score

10 volunteers (with 20 palms; 10 men and 10 women) participate in the experiment. For each palm, two contact palmvein images are acquired under appropriate hand placement, and 10 contact-less palmvein images are acquired under appropriate (2 images), forward, backward, left, right, far, near, convex, and concave hand placements. A total of 240 palmvein images are thus obtained, with 12 palmvein images per palm. A contact palmvein image or a contact-less palmvein image under the appropriate placement is used as the registration sample, and the other 11 images serve as test samples. The matching scores are obtained using the methods in [39, 40]; a part of them is shown in Table 2 (four persons with eight palms). The table indicates that the matching score (for contact-to-contact, contact-to-contact-less, and contact-less-to-contact-less comparisons) is higher when the hand of the user is posed under the appropriate placement, and the matching score decreases significantly under the other, inappropriate placements. Therefore, different positions and postures in the registration and identification phases affect the consistency of the palmvein images, which decreases intra-class matching scores and increases the FAR and FRR; that is, recognition performance worsens.

(“-” indicates failure to extract the ROI)

Table 2. Matching score in different placements

4.4 Evaluation of Acquisition Time

The acquisition time of a palmvein image is the interval from the moment the hand of the user is placed above the camera to the moment a valid palmvein image is captured after the position and posture of the hand have been adjusted appropriately. Ta and T’a represent the acquisition times of a palmvein image with and without the proposed method, respectively; Tp, Te, and Tm represent the response times of the pre-processing, feature extraction, and one-to-one matching, respectively. The times consumed by the registration and identification processes are determined as follows.

The registration time:

$$ T_{registration} = T_{a} + T_{p} + T_{e} \,\,{\text{or}}\,\,T_{registration} = T^{\prime}_{a} + T_{p} + T_{e} $$
(10)

The identification time:

$$ T_{identification} = T_{a} + T_{p} + T_{e} + T_{m} \,\,{\text{or}}\,\,T_{identification} = T^{\prime}_{a} + T_{p} + T_{e} + T_{m} $$
(11)

Given that Tp, Te, and Tm do not depend on the proposed method, the registration time Tregistration and the identification time Tidentification are determined mainly by the acquisition times Ta and T’a.

For the contact mode, considering that the user is asked to place his/her hand on the bracket, an insignificant difference exists between Ta and T’a. For the contact-less mode, the acquisition time Ta is within 3 s in most cases, and T’a is approximately 10 s to 120 s depending on the experience of the user.

Therefore, the proposed locating method not only shortens the acquisition time and realizes automatic image capture but also ensures the validity and consistency of the acquired palmvein images. Without the proposed method, the capture device cannot be productized, because the judgment experience of users cannot be embedded in the device.

5 Conclusion

To meet the demands for practicability and productization of the capture device in the palmvein identification system, we propose a real-time detecting and locating method for palmvein image acquisition. The method ensures the validity and consistency of the acquired palmvein images and captures the images automatically and quickly. On the basis of a simple electric and optical circuit and a locating algorithm, we can detect whether the hand is placed in an appropriate position and posture according to the acquired image with four light-spots. The palmvein images acquired by this method satisfy the requirements of subsequent processes, such as ROI extraction, feature extraction, and matching. Experimental results show that the proposed method is practical and meets the demand for productization.