1 Introduction

Even though biometric identification systems have become a reality and are no longer science-fiction visions, only a few modalities have been widely deployed, and such systems still have many drawbacks. The best-known and most often used modalities are fingerprints, face, hand geometry and iris. These are widely deployed in large-scale systems such as border control and biometric passports. But due to problems with large-scale scalability, security, effectiveness and, last but not least, user-friendliness and social acceptance, new emerging modalities are still needed.

Moreover, most current biometric systems and deployments are neither passive nor restriction-free. For an image-based system to work properly, many conditions usually have to be fulfilled. Users are requested to touch devices (such as plates with pegs in state-of-the-art hand and palm biometric systems) or to stand at a certain distance from cameras under specified lighting conditions. To gain wide acceptance, biometric systems should work seamlessly in unconstrained environments, imposing no requirements or limitations on users. For users, but also for system integrators and operators, the cost and the use of widely accepted devices are also crucial.

Contactless biometrics answers these postulates of seamless, unconstrained and low-cost systems. Moreover, contactless biometrics addresses user needs, since users do not like to touch acquisition devices. Furthermore, the use of mobile devices (cellphones, Android devices, handhelds) for biometrics is interesting both for users and for system/service providers (due to low cost, wide acceptance and penetration, mobility and user-friendliness).

Our methods can be used in the mobile biometrics scenario, since the portfolio of mobile terminals has exploded with devices providing greater functionality and usability and more processing power on board. Biometric human identification using contactless, unsupervised images will very soon become an emerging application.

Therefore, we propose palmprint and knuckle biometric methods designed for contactless biometric identification using handheld devices and mobile phones equipped with cameras (sample images are shown in Fig. 1).

Fig. 1 Examples of palmprint images acquired by a mobile phone camera

The proposed methods analyze palm and knuckle texture from images acquired by mobile devices.

The paper is structured as follows: in Sect. 2 the related work in palmprint and knuckle human identification is overviewed. In Sect. 3 the palmprint methodology is presented: the palmprint segmentation method for photos acquired by mobile phones (with no restrictions) in Sect. 3.1, and the feature extraction method designed for mobile devices in Sect. 3.2. In Sect. 4 the knuckle feature extraction algorithms are described. Then the results for palmprint and knuckle biometrics are shown in Sects. 5.1 and 5.2, respectively. Conclusions are given thereafter.

2 Related work

There are already several palmprint-based methods that have shown high accuracy in representing the identity of each individual [1–3].

Palmprint identification methods can be divided into three main groups: methods based on texture features, methods based on palm shape features, and hybrid methods engaging both texture and shape information. Among them, the code-based approaches give promising results.

In [4] the authors showed that code-based methods have high recognition precision, while the small size of the features allows for fast feature extraction and matching. Recently, two coding schemes have been reported as having very good performance: the competitive coding [5] and ordinal coding [6] schemes.

In [5] the authors additionally stressed the fact that most code-based approaches fail to capture the multi-scale characteristics of palm lines. Therefore, they proposed to model the problem as sparse code learning, refraining from the typical convolution-based approaches to evaluating filter coefficients.

A similar problem is presented in [7], where the authors introduced a novel approach that engages per-class correlation filters, which give a sharp peak for instances belonging to the learned class and a noisy output otherwise. In [8] the authors aimed at improving hand-geometry-based approaches by employing discretization of features using entropy-based heuristics. The results showed that this approach increases the overall effectiveness.

However, engaging both hand-geometry and texture features seems a promising strategy, as shown in [9]. In [10] the authors presented a methodology for combining palmprint texture features with palmprint polygonal shape features, which resulted in better identification results than texture features alone.

Nowadays, most hand and palmprint biometric systems are supervised and require contact with the acquisition device. Currently, only a few studies have been devoted to unsupervised, contactless palm image acquisition and hand pose invariance [11, 12]. In [13] the authors proposed a system that uses color and hand shape information for the hand detection process. The authors also introduced new approximate string matching techniques for biometric identification, obtaining a promising EER lower than 1.2%. In [14] the authors proposed sum-difference ordinal filters to extract discriminative features, which allow the palmprint identity to be verified in less than 200 ms without losing high accuracy. Such fast feature extraction algorithms are dedicated to smartphones and other mobile devices.

Hereby, we propose to use the palmprint in a contactless biometric system for mobile devices (unsupervised, uncontrolled image acquisition by mobile cameras). To achieve this goal, the proposed methods have to be not only effective, but also computationally light enough to run on mobile devices.

In the proposed palmprint biometric system, we use our own palmprint database that contains pictures of right hands. Each image is preprocessed to extract the most relevant palmprint features (wrinkles, valleys, the life line). Then the squared palmprint region of interest is extracted and used to compute properties of the texture. A set of three-valued functions is created, and each function is correlated with the palmprint to obtain a coefficient value. Each coefficient stands for a single element of the final feature vector.

Moreover, to make the system more effective and multimodal, we also tested the possibility of using knuckles for human identification. The knuckle is a relatively new and emerging biometric modality that could enhance hand-based biometric systems [15, 16, 19]. We proposed knuckle texture feature extraction methods and achieved promising results.

The knuckle is a part of the hand and is therefore easily accessible, invariant to emotions and other behavioral aspects (e.g. tiredness) and, most importantly, rich in texture features which are usually very distinctive. Knuckle biometrics can be used in systems for contactless and unrestricted access control, e.g. medium-security access control or verification systems dedicated to mobile devices (e.g. smartphones and mobile telecommunication services).

A sample knuckle image from the IIT Delhi Database is presented in Fig. 2 (http://webold.iitd.ac.in/∼biometrics/knuckle/iitd_knuckle.htm).

Fig. 2 Sample knuckle images from the IIT Delhi Database (http://webold.iitd.ac.in/∼biometrics/knuckle/iitd_knuckle.htm)

Even though knuckle biometrics is a relatively new and little-known modality, some feature extraction methods and results have already been published. So-called KnuckleCodes have been proposed, and other well-known feature extraction methods such as DCT, PCA, LDA, ICA, orientation phase and Gabor filters have been investigated with very good identification results [16–22].

Hereby, we propose to use texture feature extraction methods, namely the Probabilistic Hough Transform (PHT) and Speeded Up Robust Features (SURF), together with an original 3-step classification methodology [23, 24].

3 Contactless palmprint biometrics

3.1 Palmprint segmentation and extraction

Acquired palmprint images (Fig. 1) need to be pre-processed in order to perform a successful palmprint extraction process. First, the skin color is detected. This procedure reduces the influence of unwanted background elements (such as reflections) on the palmprint detection process. The skin detection is based on the following set of conditions: a pixel (R, G, B) is classified as skin if R > 95, G > 40, B > 20, max(R, G, B) − min(R, G, B) > 15, |R − G| > 15, R > G and R > B.

This approach resulted in correct skin detection for all the images.
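As an illustration, the rule maps directly onto array operations; below is a minimal NumPy sketch (the function name and signature are ours, not part of the original system):

```python
import numpy as np

def skin_mask(image_rgb):
    """Per-pixel skin classification of an RGB uint8 image using the rule above."""
    rgb = image_rgb.astype(np.int16)  # signed type: differences must not wrap around
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return ((r > 95) & (g > 40) & (b > 20)
            & (rgb.max(axis=-1) - rgb.min(axis=-1) > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))
```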

After the skin detection procedure, the image is gently blurred to soften the edges of the extracted region. Then the image is binarized to separate the palm from the background, labeling palm pixels as 1 and background pixels as 0.

After this preliminary processing, we apply an algorithm that finds the most significant points of the palm. A sample result of palm significant point detection is presented in Fig. 3.

Fig. 3 Palm significant points detection

The point P.0 is the palm-region pixel closest to the top edge of the image. The next points, marked P.1, P.2 and P.3, are found by moving along the palm edge, starting from P.0; a point is marked as significant at each local minimum of the analyzed pixel's distance to the bottom edge of the picture. The points P.5, P.6 and P.7 are found by detecting the first background pixel on lines L3, L4 and L2, respectively. The line marked L1 passes through points P.1 and P.4, and it is used as a reference to find the other lines (L2, L3 and L4). The lines L2 and L3 are found by rotating L1 by 30° and 60°, respectively, using P.1 as the pivot point. The line L4 is found by rotating L1 by 60° using P.4 as the pivot point.
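These rotations reduce to rotating the direction of L1 about a pivot; a minimal sketch, assuming points are (x, y) pairs (the names and the sign convention are ours — in image coordinates the y-axis points down, so the sense of rotation may need flipping):

```python
import numpy as np

def rotated_line(pivot, other, angle_deg):
    """Rotate the line through pivot and other by angle_deg around pivot;
    returns the pivot and a unit direction vector of the rotated line."""
    d = np.asarray(other, float) - np.asarray(pivot, float)
    d /= np.linalg.norm(d)
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return np.asarray(pivot, float), rot @ d

# L2, L3: rotate L1 by 30 and 60 degrees around P.1; L4: by 60 degrees around P.4
# l2 = rotated_line(P1, P4, 30); l3 = rotated_line(P1, P4, 60); l4 = rotated_line(P4, P1, 60)
```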

The detected significant points mark the palmprint area (all the points except P.0). To solve the problem of palm rotation, we implemented a procedure that finds the rotation angle and applies a rotation in the opposite direction. The result of the rotation elimination procedure is shown in Fig. 4.

Fig. 4 Rotation elimination procedure

The pre-processed image is the input to our palmprint extraction algorithm. Here, we use our original methodology, in which square-shaped palm detection is merged with polygon-shaped palm detection. The marked points are used to extract palmprints of polygonal and rectangular shape. The results of our palmprint detection algorithm are presented in Fig. 5.

Fig. 5 Polygonal and rectangular palms

The rectangular palmprint extraction algorithm is based on information gained during the preliminary processing phase (the palmprint rotation angle and the positions of points P.1 and P.4); its results are presented in Figs. 6 and 7.

Fig. 6 Rectangular palmprint extraction approach

Fig. 7 Significant points of the palmprint geometry

In our previous work, we combined rectangular palm features with polygonal palm shape features [10]. In this work, designed for mobile phones and handhelds, we use only rectangular palms as the input for the further feature extraction steps.

3.2 Feature extraction method for mobile devices based on three-valued base functions

Nowadays, most hand and palmprint biometric systems are supervised and require contact with the acquisition device. However, hand features, palmprints and knuckles can also be used in a passive and contactless biometric system.

In this paper, two-dimensional discrete functions are proposed to construct a basis of vectors \(\{ v_1, v_2, v_3,\ldots,v_N \}\) that is used to project each palmprint \(\{ p_1, p_2, p_3,\ldots,p_K \}\) onto a new feature space, where K is the number of images in the learning data set and N is the number of masks used for the projection.

In other words, we answer the question of how similar the k-th palmprint is to \(v_n\) by computing the projection coefficient \(a_{kn}\). This is achieved by computing the dot product of \(v_n\) and the palmprint \(p_k\), as described by Eq. 1.

$$ a_{kn} = ( p_k \cdot v_n ) $$
(1)

The projection coefficients form the final feature vector described by Eq. 2. The vector \(w_k\) represents a single palmprint image and is stored in the database.

$$ w_{k} = ( a_{k1}, a_{k2}, a_{k3},\ldots, a_{kN}) $$
(2)

The length of each vector \(w_k\) is constant and equal to the size of the set of \(v\) vectors. The two-dimensional masks are three-valued (−1, 0, 1) functions. Some examples of these masks are shown in Fig. 8.

Fig. 8 Examples of 5 × 3 three-valued masks
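A minimal sketch of Eqs. 1 and 2, treating each mask as a small matrix of −1/0/1 values applied at a fixed position in the palmprint (the per-mask placement is our assumption, since the masks are smaller than the image):

```python
import numpy as np

def feature_vector(palmprint, masks):
    """Project a grayscale palmprint (2D float array) onto three-valued masks;
    each dot product a_kn (Eq. 1) becomes one element of w_k (Eq. 2)."""
    w = []
    for mask, (row, col) in masks:               # mask: 2D array over {-1, 0, 1}
        h, wd = mask.shape
        patch = palmprint[row:row + h, col:col + wd]
        w.append(float(np.sum(patch * mask)))    # a_kn = p_k . v_n
    return np.array(w)                           # w_k, stored in the database
```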

The idea of the three-valued masks refers to the Haar-like functions proposed by Viola and Jones [25]. The advantage of such functions is that they can be computed in near real time thanks to integral images.

In this paper we decided to use three-valued masks (instead of two-valued Haar-like functions), since we noticed that palmprint images contain not only very bright or dark features, but also gray areas where the texture varies only slightly.
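The near-real-time claim rests on the standard integral-image trick: the sum over any axis-aligned rectangle costs four lookups, and a three-valued mask decomposes into a few rectangle sums (+1 regions minus −1 regions). A minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Integral image with a zero border, so any rectangle sum takes 4 lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img.astype(float), axis=0), axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1), independent of the rectangle size."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```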

Each of the masks shown in Fig. 8 can be described by a two-dimensional matrix. An example is shown in Fig. 9.

Fig. 9 Representing the mask by a 2D matrix

3.3 Methods for generating mask functions

The biggest advantage of the three-valued masks is that they can be computed in near real time using integral images.

However, the major problem with the proposed approach is choosing an appropriate set of masks that describes the significant features of the palmprint. The second problem is choosing an appropriate size for the masks.

We expected that low-resolution masks (9 × 9 and smaller) describe low-frequency features such as valleys, while high-resolution masks describe high-frequency properties (wrinkles or the position and shape of the life line).

To solve the task of choosing appropriate masks, three strategies were investigated:

  • masks are generated randomly (Sect. 3.3.1),

  • masks are built by humans (Sect. 3.3.2),

  • masks are generated from eigen-palms obtained by PCA decomposition of the palmprints (Sect. 3.3.3).

3.3.1 Random masks

This is the simplest of the proposed strategies. It is based on the following algorithm (a code sketch follows the list):

  1. Define the upper and lower mask resolution boundaries.

  2. Define the number of masks to be created.

  3. Define how many non-zero values the masks should have.

  4. Generate a random size for the two-dimensional matrix.

  5. Set all positions in the matrix to zero.

  6. Choose a random position in the matrix and set it randomly to −1, 0 or 1.

  7. Repeat step 6 until the condition specified in step 3 is satisfied.

  8. Repeat steps 4 to 7 until the condition specified in step 2 is satisfied.

  9. Generate feature vectors for each palm.

  10. Compute FAR, FRR and the classification error.

  11. Repeat steps 2–10 until satisfactory FAR and FRR are achieved.
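A minimal sketch of steps 1–8 (the evaluation loop of steps 9–11 is omitted); all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng()

def random_mask(min_size=3, max_size=15, n_nonzero=8):
    """Steps 4-7: a random-size zero matrix in which n_nonzero randomly chosen
    cells are set randomly to -1, 0 or 1."""
    h, w = rng.integers(min_size, max_size + 1, size=2)   # step 4, within step 1 bounds
    mask = np.zeros((h, w), dtype=np.int8)                # step 5
    for _ in range(n_nonzero):                            # steps 6-7
        mask[rng.integers(h), rng.integers(w)] = rng.choice([-1, 0, 1])
    return mask

masks = [random_mask() for _ in range(50)]                # step 8 (step 2: 50 masks)
```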

3.3.2 Manually selected features

In this strategy, several people were involved in generating the feature masks. Each person was asked to label very dark areas as −1, bright areas as 1, and the rest of the palmprint area as 0. Each person was also free to decide on the mask resolutions. The GUI shown in Fig. 10 was created for the users' convenience.

Fig. 10 The “mask creator” application and the 3D mask representation

Each person was responsible for generating at least 50 masks. The process was repeated several times to obtain several sets. Each set was used to create feature vectors, and each set of feature vectors was tested against FAR, FRR and the classification error to choose the best one.

3.3.3 Eigen-palms extraction

In this strategy, principal component analysis (PCA) is employed to produce eigen-palms from the learning data set [26].

PCA is a statistical technique successfully adopted for tasks such as face recognition, image compression and finding patterns in high-dimensional data. The method assumes that variance implies importance, which is useful from our point of view (we have grayscale palmprints with varying luminance).

First, the mean palmprint is computed from the learning data, as it is needed to build the covariance matrix of the palmprints. Then the matrix is decomposed into eigenvectors and eigenvalues. The eigenvectors are also called eigen-palms. The eigenvalues indicate how important the role of each eigenvector (eigen-palm) is in the covariance matrix (the greater the value, the more important the eigenvector).

Fig. 11 shows sample eigen-palms created by PCA decomposition of the learning data set. The eigen-palms with the highest eigenvalues were chosen as references for creating the three-valued masks.

Fig. 11 Examples of eigen-palms with the highest eigenvalue

Fig. 12 shows examples of feature masks created from the reference eigen-palms. The mask dimensionality was varied in several experiments to find the relation between mask size and system effectiveness (Fig. 13).
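A minimal sketch of the eigen-palm computation and the subsequent three-valued quantization; the quantization rule (a fraction of the maximum absolute value) is our assumption, as the paper does not specify how the eigen-palms were discretized:

```python
import numpy as np

def eigen_palms(palmprints, n_components=10):
    """PCA on flattened palmprints; returns eigen-palms sorted by eigenvalue."""
    X = np.stack([p.ravel() for p in palmprints]).astype(float)
    X -= X.mean(axis=0)                          # subtract the mean palmprint
    # the rows of vt are the covariance-matrix eigenvectors, strongest first
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    shape = palmprints[0].shape
    return [vt[i].reshape(shape) for i in range(n_components)]

def to_three_valued(eigen_palm, frac=0.5):
    """Quantize an eigen-palm to {-1, 0, 1}: values beyond frac * max|value|
    become +/-1, everything else 0 (assumed rule)."""
    t = frac * np.abs(eigen_palm).max()
    return np.where(eigen_palm > t, 1,
                    np.where(eigen_palm < -t, -1, 0)).astype(np.int8)
```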

Fig. 12 Examples of 8 × 8 masks created from eigen-palms used as reference

Fig. 13 Eigen-palm and its approximations (mask dimensionality of 4 × 4, 8 × 8, and 12 × 12, respectively)

Fig. 14 Examples of SURF points detection for knuckle images

4 Knuckle biometrics methodology

In order to increase the identification robustness of the system, we propose a multimodal approach in which palmprint texture features are merged with knuckle texture features.

However, due to the lack of palmprint and knuckle images for the same set of subjects, we decided to use the IIT Delhi Knuckle Database to evaluate our approach [18, 19].

First, the knuckle image is obtained from the individual requesting access to the system. The knuckle image is pre-processed to extract the characteristic features; the preprocessing includes both edge detection and thresholding. The image is further analyzed by means of the PHT, which is used both for determining the dominant orientation and for building the basic feature vector. We also calculate an enhanced feature vector from the PHT output, which feeds the final classifier based on SURF texture features. Then the 3-step classification methodology is applied in a broad-to-narrow manner: for the computed "basic feature vector", the nearest neighbors yielding the shortest Euclidean distance are chosen, and for each image in the kNN set the complex feature vectors are compared. The kNN approach decreases the number of costly computations without losing overall system effectiveness (as discussed in detail in Sect. 5).

4.1 Preprocessing—lines extraction

The most noticeable knuckle texture features are the lines and wrinkles located in the bending area of the finger joints (see the first row of Fig. 17).

Therefore, our methodology focuses on extracting those lines. First, the image is binarized using an adaptive threshold estimated by means of Eq. 3:

$$ T = \mu - \frac{\delta}{6}, $$
(3)

where T indicates the threshold value, μ the mean value and δ the standard deviation. Both the mean and the standard deviation are computed locally in blocks of 7 × 7 pixels.
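A minimal sketch of Eq. 3 with the 7 × 7 local statistics obtained by box filtering; the polarity (dark line pixels become foreground) is our assumption:

```python
import cv2
import numpy as np

def adaptive_binarize(gray, block=7):
    """Binarize with the local threshold T = mu - delta/6 (Eq. 3); mu and delta
    are computed in block x block neighborhoods via box filters."""
    g = gray.astype(np.float64)
    mu = cv2.blur(g, (block, block))                                 # local mean
    var = np.maximum(cv2.blur(g * g, (block, block)) - mu ** 2, 0.0)
    T = mu - np.sqrt(var) / 6.0
    return np.where(g < T, 255, 0).astype(np.uint8)  # dark lines -> foreground
```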

The result of the adaptive thresholding is shown in Fig. 15b. It can be noticed that such an image is quite noisy: some edges suffer from line discontinuities, while the background is filled with small spots. This problem is solved by applying the PHT.

Fig. 15 The knuckle image example (a), enhanced major lines after thresholding (b), and the lines detected by PHT (c)

The PHT is a modification of the classical Hough transform (HT), first introduced by Matas et al. [23]. The PHT tries to minimize the computation requirements by repeatedly selecting a random point for voting (in contrast to the point-by-point approach of the classical HT). For each selected pixel the accumulator is updated; if its value exceeds a predefined threshold, a line is searched for in the corresponding direction and its pixels are successively removed from the input image. The procedure is repeated until the input image is empty [27, 28].
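OpenCV ships a PHT implementation; a minimal sketch of extracting line segments from the binarized knuckle image (all parameter values are illustrative, not the paper's):

```python
import cv2
import numpy as np

def detect_lines(binary):
    """Probabilistic Hough transform on a binary image (non-zero = line pixels);
    returns an (N, 4) array of segments (x1, y1, x2, y2), possibly empty."""
    lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=30,
                            minLineLength=10, maxLineGap=3)
    return lines.reshape(-1, 4) if lines is not None else np.empty((0, 4), int)
```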

Later, we also use the PHT to extract the dominant orientation and to build the "basic feature vectors".

4.2 Basic features

The basic feature vector describing the knuckle texture is built from the PHT output, which contains a set of line descriptors represented by Eq. 4, where \({\rm LD}_i(N)\) stands for the N-th line descriptor of the i-th image, \((b_x, b_y)\) are the Cartesian coordinates of the line starting point, \((e_x, e_y)\) are the Cartesian coordinates of the line end point, θ is the angle between the line normal and the x-axis, and d is the line length expressed in pixels.

The number of extracted lines (N) depends strictly on the spatial properties of the knuckle and varies from image to image. Therefore, it is not used directly to build the feature vector.

$$ {\rm LD}_i(N) = [b_{xN},b_{yN},e_{xN},e_{yN},\theta_N , d_N] $$
(4)

Since a particular knuckle may be rotated, the dominant orientation is extracted from the θ angles of the line descriptors. It is used to rotate the analyzed image in the opposite direction, aligning the dominant lines perpendicular to the y-axis. After that, the y position and length of each line are used to build the feature vector: a 30-bin 1D histogram is adopted, as shown in Fig. 16. This approach is based on the fact that the longest, characteristic lines of the knuckle are concentrated around one rotation angle (as shown in Fig. 18).
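A minimal sketch of the basic feature vector: after rotation alignment, each segment votes its length into a 30-bin histogram over its y position. The exact binning and the use of the segment midpoint are our assumptions:

```python
import numpy as np

def dominant_angle(segments, bins=36):
    """Angle bin with the largest cumulative segment length (cf. Fig. 18).
    segments: (N, 4) array of (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = segments.T.astype(float)
    theta = np.arctan2(y2 - y1, x2 - x1)
    lengths = np.hypot(x2 - x1, y2 - y1)
    hist, edges = np.histogram(theta, bins=bins, range=(-np.pi, np.pi),
                               weights=lengths)
    i = int(np.argmax(hist))
    return 0.5 * (edges[i] + edges[i + 1])

def basic_feature_vector(segments, height, bins=30):
    """30-bin histogram of segment y-midpoints weighted by segment length
    (d in Eq. 4), computed after rotating by the dominant angle."""
    x1, y1, x2, y2 = segments.T.astype(float)
    lengths = np.hypot(x2 - x1, y2 - y1)
    y_mid = (y1 + y2) / 2.0
    hist, _ = np.histogram(y_mid, bins=bins, range=(0, height), weights=lengths)
    return hist
```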

Fig. 16 The basic feature vector is built using the PHT output and the 1D histogram

Fig. 17 Sample knuckle images and their representation after applying the PHT transform

Fig. 18 Line rotation histogram. The x-axis indicates the line rotation while the y-axis indicates the cumulative line length

The vectors described in this section are called "basic" since they are relatively short (a single row vector of length 30) and are used for coarse clustering of the data set, decreasing the number of complex feature vector comparisons in the further phases of our human identification system.

4.3 Knuckle line model obtained from PHT

The set of line descriptors (Eq. 4) obtained from the Hough transform is converted to an image representation, which is the input of the matching algorithm, as shown in Fig. 19.

Fig. 19 During the procedure of PHT model matching, a map of distances is generated and the closest match is chosen

Both the query and template images (the latter chosen from the kNN selected by the basic feature classifier) are transformed and compared using the Euclidean metric. The output of the matching block is a scoring map of size 30 × 30. This size is determined by the search ranges: the template image is offset in the 〈−15, 15〉 range in both the x and y dimensions, as defined by Eq. 5, where i and j index the elements of the scoring map, W and H are the query image width and height, and q and t represent the query and template images, respectively.

$$ {\rm score}(i,j) = \sum_{x=0}^{W}\sum_{y=0}^{H}(q(x,y)-t(x+i,y+j))^2 $$
(5)

The lowest score (the shortest distance) is selected. It indicates how similar the query image is to the template and allows offsets in the knuckle images to be handled. This step is necessary, since the knuckle database images were acquired using a peg-free method (http://webold.iitd.ac.in/∼biometrics/knuckle/iitd_knuckle.htm).
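A direct (unoptimized) sketch of the score map of Eq. 5 over the 〈−15, 15〉 offset range; zero padding outside the template is our border-handling assumption:

```python
import numpy as np

def score_map(query, template, max_offset=15):
    """Sum of squared differences between the query and the template shifted
    by every offset (i, j) in [-max_offset, max_offset] (Eq. 5)."""
    h, w = query.shape
    m = max_offset
    padded = np.zeros((h + 2 * m, w + 2 * m))
    padded[m:m + h, m:m + w] = template          # zeros outside the template
    scores = np.empty((2 * m + 1, 2 * m + 1))
    for i in range(-m, m + 1):
        for j in range(-m, m + 1):
            shifted = padded[m + j:m + j + h, m + i:m + i + w]  # t(x+i, y+j)
            scores[i + m, j + m] = np.sum((query - shifted) ** 2)
    return scores  # scores.min() is the matching distance of the two line models
```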

Then the five images from the kNN set yielding the lowest scores are chosen as the input for the SURF-based classifier.

4.4 SURF features

SURF stands for Speeded Up Robust Features. It is a robust image detector and descriptor, first presented by Herbert Bay et al. in 2006 [24]. Nowadays, it is widely used in object recognition and 3D reconstruction.

The key point of the SURF detector is the determinant of the Hessian matrix (Eq. 6), the matrix of second-order partial derivatives of the luminance function.

$$ \nabla^2f(x,y) = \left[ \begin{array}{ll} \frac{\partial^2f} {\partial x^2}&\frac{\partial^2f} {\partial x \partial y} \\ \frac{\partial^2f} {\partial x \partial y}&\frac{\partial^2f} {\partial y^2} \end{array} \right] $$
(6)
$$ {\rm det}( \nabla^2f(x,y) ) = \frac{\partial^2f} {\partial x^2}\frac{\partial^2f} {\partial y^2} - \left( \frac{\partial^2f} {\partial x \partial y} \right)^2 $$
(7)

The value of the determinant (Eq. 7) is used to classify the maxima and minima of the luminance function (the second-order derivative test). In the case of SURF, the partial derivatives are calculated by convolution with the second-order, scale-normalized Gaussian kernel. To make the convolution operation more efficient, Haar-like functions are used to approximate the derivatives.

If the determinant value is greater than a threshold (estimated during experiments on the learning data set), the point is considered a fiducial point. The greater the threshold, the fewer (but stronger) points are detected. For each fiducial point, a texture descriptor is calculated.

In our approach, we use the SURF points to find the closest match (if any) between the query image and the templates selected by the PHT-based classifier. First, the points whose Hessian determinant exceeds the threshold are selected in both the query and template images, yielding two point sets. Based on the texture descriptors, matching pairs between those sets are found and the outliers (points in one set that have no representative in the other) are removed. Then the matching cost between the sets is estimated using Eq. 8:

$$ m_{\rm cost} = \sum_{i=0}^{N}d\left(p_i - \frac{1}{N}\sum_{j=0}^Np_j ,q_i - \frac{1}{N}\sum_{j=0}^Nq_j\right), $$
(8)

where N, d, p and q represent the number of matching pairs, the Euclidean distance, a point from the template image and a point from the query image, respectively. An example of such a mapping is shown in Fig. 20.
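A minimal sketch of this match-and-cost step, assuming an OpenCV build with the contrib xfeatures2d module (SURF is patented and absent from some builds); Lowe's ratio test is our choice of outlier-rejection rule, as the paper does not name one:

```python
import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # threshold: illustrative

def surf_matching_cost(query, template, ratio=0.75):
    """Detect SURF points in both grayscale images, match descriptors, drop
    outliers with the ratio test, and compute the centered cost of Eq. 8."""
    kq, dq = surf.detectAndCompute(query, None)
    kt, dt = surf.detectAndCompute(template, None)
    if dq is None or dt is None:
        return None                               # no fiducial points -> no match
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for m_n in matcher.knnMatch(dt, dq, k=2):     # template -> query matches
        if len(m_n) == 2 and m_n[0].distance < ratio * m_n[1].distance:
            good.append(m_n[0])
    if not good:
        return None
    p = np.array([kt[m.queryIdx].pt for m in good])   # template points
    q = np.array([kq[m.trainIdx].pt for m in good])   # query points
    p -= p.mean(axis=0)                               # center both sets (Eq. 8)
    q -= q.mean(axis=0)
    return float(np.sum(np.linalg.norm(p - q, axis=1)))
```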

Fig. 20 Detected fiducial SURF points in the query image and their corresponding matches in the template image

4.5 Classification

We propose a classification methodology that consists of three steps. First, 50 images are selected on the basis of the basic feature vector. Then, five images are selected on the basis of the PHT feature vector. Finally, the SURF feature vector is used to select one image.

When the basic feature vector is computed for a particular knuckle image, it is looked up in the database to find the k nearest neighbors in terms of Euclidean distance. The number k was determined empirically as a compromise between system effectiveness and performance. Figure 21 shows that the classification error decreases significantly as the number of neighbors (k) is increased. On the basis of our experiments, we set this number to 50.

Fig. 21 Classification error versus the number of nearest neighbors

For the k nearest neighbors, the PHT-based method is used to obtain the five closest matches, and from these five images the SURF-based classifier chooses one. In case the SURF-based classifier fails and is unable to find a matching template, the first nearest neighbor obtained from the PHT is returned with an appropriate matching score. A sketch of the whole cascade is given below.
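The sketch reuses basic_feature_vector, score_map and surf_matching_cost from the previous sections; the database layout (a list of dicts holding the basic vector, the PHT line model and the grayscale image) is an assumption:

```python
import numpy as np

def identify(query, database, k=50):
    """Broad-to-narrow cascade: kNN on 30-bin basic vectors -> top 5 by the PHT
    score map -> closest SURF match, with a PHT fallback when SURF fails."""
    dists = [np.linalg.norm(query["basic"] - e["basic"]) for e in database]
    knn = [database[i] for i in np.argsort(dists)[:k]]        # step 1: 50 candidates
    by_pht = sorted(knn,
                    key=lambda e: score_map(query["lines"], e["lines"]).min())
    top5 = by_pht[:5]                                         # step 2: 5 candidates
    scored = [(surf_matching_cost(query["img"], e["img"]), e) for e in top5]
    scored = [(c, e) for c, e in scored if c is not None]
    if scored:
        return min(scored, key=lambda ce: ce[0])[1]           # step 3: best SURF match
    return by_pht[0]                                          # SURF failed: PHT fallback
```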

5 Results

The experiments and results of the proposed palmprint and knuckle feature extraction methods are reported in Sects. 5.1 and 5.2, respectively.

During the experiments, the system threshold was set at the point where the false acceptance rate (FAR) equals the false rejection rate (FRR).

5.1 Mask-based palmprint biometrics for mobile devices

In the experiments we used our own database consisting of 252 images (84 individuals, with 3 images of the right hand per individual). Standard consumer devices were used (Canon, HTC, Motorola) and the image resolution is 640 × 480.

10% of the individuals in the database were used as impostors, while the remaining 90% were used as genuine users (one sample for testing and the rest for registration).

The effectiveness of the mask-based method is evaluated in Fig. 22. The characteristics show that applying more than 50 masks does not increase the system effectiveness, which is important from the computational-overhead point of view.

Fig. 22 Number of masks versus system effectiveness for different mask sizes

However, the mask resolution affects the system robustness more significantly. Tripling the width and height of the masks (from 15 × 15 to 45 × 45) reduces FAR and FRR from 3.4 to 1.7%.

During the experiments, the mask extraction method was also evaluated against the system effectiveness. The results are shown in Fig. 23.

Fig. 23 FAR and FRR versus system effectiveness for different mask extraction approaches

The experiments showed that humans fail to select more than 50 masks that yield satisfactorily low error rates. These results might be influenced by the nature of the task (the subjects were not trained) and by its complexity: humans had difficulty modeling the masks properly.

The method based on randomly created feature masks yields only fairly good results, probably because no additional algorithm searches for the minimum FAR, FRR and classification error (CE). We think that employing a genetic algorithm for this task may be sufficient and may give better results.

After the evaluation, we found that the best results were achieved by the eigen-palm (PCA) approach.

5.2 Knuckle biometrics

In the performed experiments, we set up the following classification strategy:

  1. select 50 images on the basis of the basic feature vector,

  2. select five images on the basis of the PHT feature vector,

  3. select the one closest match using the SURF descriptor.

The proposed approach was tested using the IIT Delhi Knuckle Database (http://webold.iitd.ac.in/∼biometrics/knuckle/iitd_knuckle.htm). The knuckle images were obtained from 158 individuals, each contributing five image samples, which gives 790 images in the database. The database was acquired over a period of 11 months.

For the efficiency assessment, the fivefold method was applied (the same method the authors of the database applied in [20]) and the average over the experiments is presented. The average equal error rate obtained during the experiments is 1.02%.

Table 1 shows the EER obtained in each experiment and its deviation from the mean value. The FAR and FRR versus the system threshold for one of the experiments are shown in Fig. 24.

Table 1 EER obtained during experiments

Fig. 24 Knuckle biometrics: FAR versus FRR

Used separately, the PHT gave a 95.56% classification rate, while the SURF gave 85.75%.

The SURF method failed so often because it was unable to find a match between the query knuckle image and the template; those failures were covered by the PHT. However, the PHT failed when it came to distinguishing between two or more similar knuckles among the k nearest neighbors; in this situation SURF was more accurate.

The experiments show that the combination of PHT and SURF gives better results than either method used separately. The obtained results suggest that the simple and fast line and texture extraction techniques used are promising and give satisfactory results.

6 Conclusions

In this article, our developments in palmprint segmentation and feature extraction for human identification are presented. Moreover, we presented a new approach to knuckle biometrics.

We showed that both palmprint and knuckle features may be considered very promising biometric modalities for contactless human identification systems. Our goal was to propose efficient algorithms that can run on mobile devices.

In this paper we showed results for palmprint and knuckle biometrics, but on separate databases. We are now working on creating a multimodal hand–palm–knuckle database acquired by mobile phone cameras in unrestricted (real-life) conditions.

Our methods can be used in the mobile biometrics scenario, since the portfolio of mobile terminals has exploded with devices providing greater functionality and usability and more processing power on board. It is estimated that by 2015 all mobile handsets sold will be "smart" [29].

We believe that biometric human identification using contactless, unsupervised images will very soon become an important application.