1 Introduction

Biometrics is an accepted and dependable answer to the identity verification dilemma because it identifies individuals based on physiological or behavioral traits that are inherent to the person [5]. Physiological and behavioral traits that are normally utilized for biometric recognition are the face, fingerprint, iris, retina, DNA, signature, palm print, ear, voice, keystroke dynamics, hand geometry, and gait. The Automated Fingerprint Identification System (AFIS) is recognized and acknowledged throughout the world [18]. It has, moreover, become one of the significant topics in the security field, and numerous researchers continue to carry out research on it. The quality of the fingerprint image is essential to ensure the good performance of fingerprint recognition since the recognition process depends heavily on the quality of fingerprint images [6]. A good quality fingerprint image has high contrast and well-defined ridge structures. A poor quality fingerprint image is marked by low contrast and does not have well-defined boundaries between the ridges [22]. By having a good quality fingerprint image as input to an automated fingerprint recognition system, the result is more accurate. However, many previous works have shown that poor quality fingerprint images degrade the accuracy of fingerprint recognition results [23, 29].

The performance of a minutiae extraction algorithm relies heavily on the quality of the input fingerprint images. A scarred fingerprint tends to change and damage the structure of the fingerprint ridges and valleys. This circumstance results in unreliable minutiae extraction and affects the accuracy of fingerprint recognition. This research is conducted to remove the scar by choosing the right broken ridges that need to be reconnected. This reconnection process improves the structure of the broken ridges so that the quality of the fingerprint images can be enhanced and the true minutiae can be extracted. The hypothesis for this research is that if the scar in the fingerprint image can be removed, then the accuracy of minutiae extraction can be increased. This research aims to develop an image enhancement approach that improves the quality of scarred fingerprint images in order to generate an accurate minutiae extraction result.

The objectives to be achieved from this research are:

  1. To develop a method for removing noise in the fingerprint images and, at the same time, to enhance the level of contrast.

  2. To improve the current ridge structure enhancement algorithm to reconstruct the broken ridges affected by scars.

  3. To evaluate the effectiveness of the proposed enhancement approach by using the extracted minutiae.

In order to reach the objectives, there are four main stages involved: preprocessing; image enhancement; feature extraction; and evaluation. Preprocessing involves an image cropping process to remove the unwanted area in the fingerprint images. In the image enhancement stage, two main processes need to be done: noise removal and ridge structure enhancement. The image enhancement process is essential to improve the quality of fingerprint images. Then, minutiae will be extracted in the feature extraction stage by using the minutiae marking method. Finally, the proposed image enhancement approach's performance and effectiveness will be measured in the evaluation stage. For this purpose, only scarred fingerprint images will be used. The contributions of this research can therefore be summarized into three achievements: the noise removal technique, the ridge structure enhancement method, and the improved results of the quality index.

The organization of this paper is as follows: In Sect. 2, a review of previous studies related to scarred fingerprint images is presented. In Sect. 3, the methodology of the research is presented in detail. In Sect. 4, the experiments are described and numerical results are presented. Finally, in Sect. 5, a general conclusion is drawn.

2 Related works

Fingerprint images contain global and local features. Global features are characteristics that can be seen with the naked eye, such as the ridge pattern and the core and delta areas. Local features are known as the minutiae points [1, 2]. The two most important local ridge characteristics are the ridge ending and the ridge bifurcation. These two local ridge characteristics are commonly known as minutiae. A ridge ending is defined as the point where a ridge ends, while a ridge bifurcation is defined as the point where a ridge forks or diverges into branch ridges [18]. Minutiae can be accurately located and easily detected from the thinned ridges. In some cases, ridge structures in a poor quality fingerprint image cannot be correctly detected because the structures are not well defined [25, 29]. Thus, several problems may occur in the system [22]. First, a significant number of spurious minutiae could be created when ridges are mistakenly attached together, for instance, due to wetness or over-inking [10]. Second, a high percentage of genuine minutiae may be ignored because the system falsely detects the minutiae points. Third, large errors in minutiae localization may be introduced since the positions of minutiae points are changed, added, or removed. These problems must be solved to preserve true ridge endings and bifurcations. It is crucial to apply image enhancement techniques before the minutiae extraction process to get a more dependable estimation of the minutiae locations [9, 22, 26]. Patel et al. [24] presented an algorithm to identify the valid minutiae. The proposed algorithm increases the acceptance rate and accuracy level. It enhanced most of the preprocessing phases to remove noise and produce a clear fingerprint image for feature extraction, and enhanced the post-processing phases to eliminate falsely extracted minutiae, achieve exact core point detection, and match valid minutiae.

Gupta et al. [12] proposed fingerprint image enhancement and reconstruction using orientation and phase reconstruction. Gupta et al. [12] considered the minutiae density and the orientation field direction for the reconstruction of the fingerprint. Two public domain databases, Fingerprint Verification Competition 2002 (FVC2002) and Fingerprint Verification Competition 2004 (FVC2004), were used for the experimental results and to validate the suggested methods for fingerprint reconstruction and enhancement [12]. Fang Zhang [30] proposed a fusion positioning algorithm for indoor WiFi and Bluetooth based on a discrete mathematical model. The position fingerprint method is used to realize WiFi indoor positioning through two stages of “off-line/training” and “online positioning” [30]. Pedro Martins [21] proposed improving Bluetooth beacon-based indoor location and fingerprinting. Pedro [21] introduces a method for beacon-based positioning based on signal strength measurements at key distances for each beacon. This method allows using different beacon types, brands, and location conditions/constraints. Depending on each situation (i.e., hardware and location), it is possible to adapt the distance measuring curve to minimize errors and support longer distances, while at the same time keeping good precision.

In conclusion, most fingerprint recognition systems are based on minutiae matching. The performance, therefore, is dependent on the reliable extraction of the minutiae points. A precise extraction of minutiae depends on the image quality. The quality of the image also depends on various factors such as the acquisition technique, variations in skin, and impression conditions. Most of these factors are also responsible for the occurrence of scars or creases on the surface of the fingerprint in the fingerprint image. These scars appear either because they are present on the finger itself or because of acquisition errors. The presence of scars leads to unreliable minutiae extraction, as the system will falsely detect fake minutiae caused by the broken ridges. Therefore, it is crucial to apply an image enhancement to enhance low quality fingerprint images as well as to improve the clarity of the fingerprint ridge and valley structure affected by scars and creases.

Khan [19] presented a hardware architecture of a multimodal biometric system that massively exploits the inherent parallelism. The proposed system is based on multiple biometric fusion that uses two biometric traits, fingerprint and iris. Paper [16] presented a robust and efficient post-processing algorithm for fingerprint minutiae extraction. Minutiae extraction-based fingerprint recognition is important for person authentication. However, missing and spurious minutiae lead to inaccuracy. Herein an attempt was made to overcome this challenge and achieve a higher degree of accuracy for minutiae extraction. In preprocessing, the fingerprint image was subjected to a fast Fourier transform (FFT). Further, the filtered fingerprint ridges and valleys were subjected to binarization and thinning, followed by the Rutovitz crossing number (CN) method. In post-processing, Graham's scan algorithm-based convex hull filtering technique (CHFT) was used effectively to eliminate the bogus minutiae lying on the boundary of the fingerprint.

3 Methodology

As mentioned earlier, the performance of a minutiae-based fingerprint recognition system depends heavily on fingerprint image quality. A scarred fingerprint image is one of the low-quality fingerprint image types that causes problems in minutiae extraction, since broken ridges exist in the image. The broken ridges need to be reconnected to make sure that true minutiae are extracted instead of fake minutiae. In order to solve this problem, this research was conducted to develop an image enhancement approach to improve the quality of fingerprint images. Figure 1 shows the methodology of this research. Based on the figure, the fingerprint image acts as the input of the system. First, the fingerprint image will be preprocessed. Next, unwanted noise in the image that occurred during the data acquisition process will be eliminated using a new filter. Then, the enhancement method is implemented according to the following sequence of processes: ridge orientation estimation; ridge frequency estimation; region mask calculation; and ridge reconstruction. This sequence is essential to produce a better result at the end of the research. Then, the binarization and thinning processes will be done. After that, the minutiae marking method will be performed to extract and mark the minutiae in the extraction stage. For the evaluation stage, three evaluations will be conducted. In this section, the method of this research will be explained in detail.

Fig. 1
figure 1

Research methodology

3.1 Data set

In 1993, the National Institute of Standards and Technology (NIST) [27] released a new version of its fingerprint database, well suited for the development and testing of fingerprint classification systems. This upgraded version is called NIST Special Database 14, Mated Fingerprint Card Pairs 2, Version 2, hereinafter referred to as DB14 [28]. DB14 was acquired from the United States Department of Commerce, National Institute of Standards and Technology, and is used throughout this study. The dataset is considered a de facto standard fingerprint database for developing and testing automated fingerprint classification systems [28].

DB14 is made up of 54,000 8-bit grayscale fingerprint images obtained by the rolled method of fingerprint acquisition, scanned from 27,000 fingers. The fingerprints were scanned from two types of prints. The first type was fingerprint patterns created by rolling the individual's ink-covered finger on the fingerprint card. The second type was patterns taken with a live scanning device and printed onto a fingerprint card. These cards were then scanned at 500 dpi resolution by a standard grayscale scanner. The dataset consists of mated fingerprint card pairs, which are two cards from the same individual. The prints are 832 (W) × 768 (H) pixels. Each fingerprint was acquired two times. The first acquisition, fingerprints f0000001 to f0027000, is called the file cards; the second acquisition, fingerprints s0000001 to s0027000, is called the search cards. Moreover, the fingerprints are classified manually by experts using National Crime Information Centre (NCIC) classes assigned by the FBI. Valid classes include: Arch (A), Tented-arch (T), Left-loop (L), Right-loop (R), Whorl (W) and scar. All classes and their references such as sex, scan type, image width, image length, image depth, resolution, compression, and color space are stored in a comment field in the Wavelet Scalar Quantization (WSQ) compressed file, allowing for comparison with hypothesized classes. In addition, Table 1 shows detailed information about the distribution of classes in the dataset, and it reveals that Left-loop, Right-loop and Whorl combined constitute 93.29% of the fingerprints, which reflects the natural distribution of fingerprint classes in the population. Apart from that, there are 21 fingerprints which have defective ridge structures and could not be classified even by human experts; these are classified as the scar type.

Table 1 Classes of fingerprint NIST special database 14 (f000001 to f027000 filenames)

In this research, only fingerprint images with scars will be chosen as input images. A scar can be divided into two types: cuts and creases. The other fingerprint image categories in NIST Special Database 14 will not be used.

3.2 Preprocessing

In image processing, preprocessing is a stage that is commonly implemented as the first step of processing an image. In this research, the preprocessing stage consists of an image cropping process. The datasets used in this research contain an unwanted area. This unwanted area needs to be removed, and cropping the image is the solution for this matter. The crop image tool is a moveable and resizable rectangle that can be positioned interactively using the mouse.
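The cropping step amounts to keeping a rectangular sub-image. A minimal sketch is shown below; the rectangle coordinates are placeholders for the values chosen interactively with the crop tool, and NumPy array indexing is used only as an illustration.

```python
import numpy as np

def crop(image, top, left, height, width):
    """Keep only the selected rectangle and discard the unwanted border area."""
    return image[top:top + height, left:left + width]

# Hypothetical example: keep a 640 x 640 region starting at row 60, column 90.
# cropped = crop(fingerprint, top=60, left=90, height=640, width=640)
```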

3.3 Image enhancement approach

Noise can be defined as a process that affects the original image but is not a part of the original image. Noise is introduced into an image by means of a scanner, sensor, or digital camera [11]. There are various types of noise that can corrupt the image, such as salt-and-pepper and random-valued noise. For better image processing, an image that is free from any noise is essential. Hence, it is necessary to apply a noise removal algorithm to enhance the quality of the degraded image. Noise removal algorithms reduce and remove the visibility of noise by smoothing the entire image except for areas near contrast boundaries.

Apart from being free from noise, a good quality image also requires adequate contrast, which plays an important role in determining the quality of the image. The contrast of any image is a very important characteristic that decides the quality of the image [17]. Contrast is an important factor in any individual estimation of image quality [13]. Low contrast images contain details that are not clearly visible, and by applying contrast enhancement these details can be made more clearly visible. Contrast enhancement techniques are able to produce output images with better appearance and higher detail compared to the input images by increasing the gray-level differences among objects and the background. Based on Fig. 1, there are two main processes in this stage: noise removal and ridge structure enhancement. Each process leads to a better quality of fingerprint images.

3.4 Design the noise removal method

Digital images are prone to a variety of noise types. Noise is the effect of errors in the image acquisition process that cause pixel values that do not reflect the true intensities of the real scene. There are several ways that noise can be introduced into an image, depending on how the image is created. This research will develop a new combination filter named Median Sigmoid (MS) filter. This filter is a combination of median filter and sigmoid function. The reason to propose this filter is to remove salt-and-pepper and Gaussian noise from the acquired image as well as to enhance the image to a suitable level of contrast. Hence, the enhanced image will have better quality for the next process.

3.5 Median sigmoid filter

The Median Sigmoid (MS) filter is a combination of the median filter and a modified sigmoid function. This filter aims to remove noise in the image and thus improve the contrast of the image to an appropriate level. Reducing noise, healing broken-up ridges, cleaning up ridge valleys, and increasing the contrast between ridges and valleys in grayscale fingerprint images are the major tasks of enhancement and restoration techniques [3]. A good quality image is free from noise and has good contrast. Hence, this research proposes this new combination filter to achieve this goal. The median filter and the sigmoid function have each been applied to enhance many images in the image processing field, including fingerprint images. However, these two methods had never been combined for image enhancement. So, in this research, these two methods were combined into a new filter for the noise removal process to enhance the fingerprint image. Figure 2 shows the framework of the MS filter development proposed in this research. Based on the framework, the MS filter is developed by combining the median filter and the modified sigmoid function. To derive the formula of the MS filter, modifications are made to the formula of the median filter (Eq. 3) and the modified sigmoid function (Eq. 4).

Fig. 2
figure 2

Framework of MS filter development

Below are the formulas for the median filtering implementation. In order to implement a median filter, a filter mask is needed. In this research, a 3 × 3 filter mask that consists of an 8-neighborhood around the center (x,y) is used. Let I be the input image obtained after preprocessing, F the 3 × 3 moving filter mask, and M the output image. The pixel values in the 8-neighborhood filter mask are sorted in ascending order by using Eq. 1. Then, the median value is determined by using Eq. 2. The value of M(x,y) is then replaced by the obtained median value. This action is done by using Eq. 3.

$$ {\text{Order}}\,{\text{Set}} = F_{\left( 0 \right)} \le F_{\left( 1 \right)} \le F_{\left( 2 \right)} \cdots \le F_{{\left( {N - 2} \right)}} \le F_{{\left( {N - 1} \right)}} $$
(1)
$$ F_{{{\text{median}}}} = \left\{ {\begin{array}{*{20}l} \hfill {\frac{{F_{{\left( {N/2} \right)}} + F_{{\left( {N/2 - 1} \right)}} }}{2},} & \hfill {{\text{for }}\,N\,{\text{even}}} \\ \hfill {F_{{\left( {\left( {N - 1} \right)/2} \right)}} ,} & \hfill {{\text{for}}\, N\,{\text{ odd}}} \\ \end{array} } \right. $$
(2)
$$ M\left( {x,y} \right) = {\text{median}}\left\{ {I\left( {x,y} \right);\left( {x,y} \right) \in F} \right\} $$
(3)
$$ {\text{Matrix}}\,{\text{for}}\,{\text{median}}\,{\text{filtering}}, \, M\left( {x,y} \right) = \left[ {\begin{array}{*{20}c} 1 & 1 & 1 \\ 1 & {{\text{median}}} & 1 \\ 1 & 1 & 1 \\ \end{array} } \right] $$

where N—Number of pixels in the neighbourhood, F(0)—Minimum value, F(N−1)—Maximum value.

The operation of obtaining the median means that very large or very small values (noisy values) end up at the top or bottom of the sorted list. Thus, the median will replace a noisy value with one closer to its surroundings. As the 3 × 3 filter mask keeps moving over the entire image, all the steps are repeated until all pixel values in the image have been replaced with new pixel values. Now, the modified sigmoid function, S(x), is given as Eq. 4:

$$ S\left( x \right) = 1/(1 + e^{{c*\left( {{\text{th}} - x} \right)}} ) $$
(4)

where S(x)—Sigmoid function, x—Input pixel value, c—Contrast factor, c = 12, th—Threshold value.

The contrast factor determines the desired degree of contrast depending on the degree of darkness or brightness of the input image. For the c value, a value of about 5 is neutral, which means it gives little change to the enhanced image. On the other hand, a value of 1 reduces the contrast to about 20% of the input image, and a value of 10 increases the contrast by about 2.5 times. For effective contrast enhancement, the value of c should lie in the range of 10–25 [20]. After testing several values of c within the given range of 10–25, this research decided to use 12 as the fixed value of c. This is because 12 is the most suitable value, as it produced suitable brightness for the image. With a c value of more than 12, the produced image becomes too bright and thus eliminates some of the original information of the fingerprint. Thus, 12 is the most suitable value of c for the enhancement in this research.

By using and modifying both equations of the median filter and modified sigmoid function, the MS filter formula is given as Eqs. 5 and 6:

$$ {\text{Mask}} = c*\left( {{\text{th}}{-}M\left( {x,y} \right)} \right) $$
(5)
$$ {\text{MS}}\left( {x,y} \right) = \frac{1}{{1 + e^{{{\text{Mask}}}} }} $$
(6)

where MS(x,y)—Enhanced pixel value, c—Contrast factor, th—Threshold value, th = 0.05, M(x,y)—Median filtered image from Eq. 3.

This research used the value of 0.05 for th. With this fixed value of th, the produced image gives low normalized gray values in the image. This research needs to produce a good high contrast image to generate a clear and brighter ridge and valley structure appearance. Hence, 0.05 is the most suitable value for th, as it produced a good contrast image. This new combination filter operates directly on each pixel of an image. The filter mask is passed over the image pixel by pixel, starting from the upper right corner. Each pixel intensity in the output image is obtained by applying the sigmoid mapping of Eq. 6 to the corresponding mask value of Eq. 5.

3.6 The MS filter algorithm

Based on the formulas and the descriptions given earlier, there are several steps to implement this filtering, which are presented in Algorithm 1. Based on the algorithm, the MS filter is implemented as follows. First, load an input image, say I(x, y). In order to apply the median filter, a filter mask window is needed. For this research, a 3 × 3 filter mask window is used. Using this filter mask, the input image is scanned and the pixels in the window are sorted to find the median value. Then, the pixel at the center of the filter mask is replaced with the median value. This is done by applying Eq. 3 to the input image. These steps are repeated for all pixels in the image until the new filtered image, say M(x,y), is produced.

The next step is to apply the sigmoid function. M(x,y) will be used as the input image in the MS filter mask. Using 12 for the c value and 0.05 for the th value, Eq. 5 is computed to obtain Mask. Then, by using the value of Mask, Eq. 6 is computed to produce a new image, say MS(x,y). MS(x,y) is the enhanced image from the MS filtering implementation.
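A minimal sketch of the MS filter described above is given below. It assumes a grayscale image normalized to [0, 1]; the function name and the use of NumPy/SciPy are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def ms_filter(image, c=12, th=0.05):
    """Median Sigmoid (MS) filter sketch: 3 x 3 median filtering (Eq. 3)
    followed by the modified sigmoid mapping (Eqs. 5 and 6).
    `image` is assumed to be a grayscale array scaled to [0, 1]."""
    # Step 1: replace each pixel by the median of its 3 x 3 neighbourhood.
    m = median_filter(image.astype(float), size=3)
    # Step 2: contrast stretching with the modified sigmoid function.
    mask = c * (th - m)                  # Eq. 5
    return 1.0 / (1.0 + np.exp(mask))    # Eq. 6
```

With c = 12 and th = 0.05, median-filtered values above the threshold are pushed toward white and values below it toward black, which matches the brighter, higher-contrast ridge and valley appearance reported for these parameters.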

Figure 3 shows an example of MS filtering results before and after its implementation. Figure 3a shows the input image, while Fig. 3b shows the output image of the MS filter. Figure 3c shows the cropped area with a size of 21 × 21 pixels of Fig. 3a that is marked by the red rectangle. Figure 3d shows the corresponding cropped area of Fig. 3b. As can be seen in these figures, the ridge structure is clearer after the implementation of the MS filter since the contrast was altered. In addition, the background in the output image is cleaner, as the noise in the background was removed.

Fig. 3
figure 3

Result of MS filtering: a input image; b image after MS filtering; c 21 × 21 pixels of input image cropped, d 21 × 21 pixels of output image cropped

3.6.1 Design the ridges structure enhancement

The ridge frequency determines the correct width between two ridges. By using these parameters, the right selection of ridge endings can be made. Hence, an algorithm is designed for the enhancement of the fingerprint ridge structure. Figure 1 shows the flow of the enhancement process that will be implemented in this research. There are four main steps in the enhancement process: orientation field estimation; ridge frequency estimation; region mask estimation; and ridge reconstruction. First, orientation field estimation will be conducted to determine the direction of the ridge structure in the fingerprint image. Second, the ridge frequency will be determined. The orientation and frequency of parallel ridges and valleys are said to be the fingerprint's intrinsic properties. Third, the region mask will classify each block of the input fingerprint image as a recoverable or unrecoverable region. Lastly, the recoverable regions will be used in the ridge reconstruction method to produce a new enhanced image. After that, minutiae will be extracted by the minutiae marking method from the thinned image, which is produced after the binarization and thinning processes. The number of extracted minutiae will be stored. In this research, the gradient will be derived from a filtered image. The filtered image here is an image produced by applying the Gaussian low-pass (GLP) filter to the input image, MS(x, y). In the frequency domain, one type of GLP filter is just a constant-intensity white circle surrounded by black. The value of sigma plays an important role in producing a good gradient of the filtered image to be used to estimate the orientation of the ridges.

This research determines potential sigma values that are suitable for this purpose instead of using the fixed value of sigma proposed by Hong et al. [14]. The aim of determining the potential sigma value is to produce a better orientation image. This suggestion improves the GLP filter used in Hong et al. [14].

3.6.1.1 Enhancement of GLP filter by identifying sigma

The sigma value (the standard deviation) is used to perform the GLP filter in the orientation estimation algorithm. This filter is applied to correct incorrect local ridge orientations [14]. Sigma (σ) is a parameter that controls the variation around the mean value and thus determines the shape of the Gaussian function. To allow the filter size to vary according to the bandwidth of the Gaussian waveform, the filter size is set as a function of the sigma parameter, namely six times sigma (6σ), so that the mask covers the effective extent of the Gaussian waveform.

To select the best sigma value for this improvement process, six potential values of sigma, σ = 10, 20, 30, 40, 50 and 60, will be used as the manipulated parameter. In the original algorithm by Hong et al. [14], the sigma value was empirically set to a fixed value of 4. An experiment was conducted to determine the best sigma value for this process by calculating the distance vector of each image. This experiment is an appearance-based method that observes the difference between two images as the sigma value changes.

Figure 4 shows the gradient of the filtered image produced for each sigma value. Based on the figure, the gradient image using σ = 4 is not smooth enough, and the same goes for σ = 10 and σ = 20. However, for sigma values from σ = 30 to σ = 60, the gradient image becomes smoother and cleaner.

Fig. 4
figure 4

Gradient of filtered image by using different sigma values: a σ = 4 proposed by Hong et al. [14]; b σ = 10; c σ = 20; d σ = 30; e σ = 40; f σ = 50; and g σ = 60

Figure 5 shows the orientation image produced for each sigma value, σ = 4, 10, 20, 30, 40, 50, and 60. By looking at the orientation map (red dotted lines) on the fingerprint image, it is clearly seen that by using the sigma value proposed by Hong et al. [14], shown in Fig. 5a, the orientation map is not correct, especially in the dark area of the image. The plotted orientation directions do not follow the fingerprint ridges. By increasing the value of sigma, the orientation map becomes more reliable. However, starting from sigma equal to 30, there are not many differences between the plotted orientations (Fig. 5d–g). There are differences, but they are too small to be noticed.

Fig. 5
figure 5

Ridge orientation by using different sigma values: a σ = 4 proposed by Hong; b σ = 10; c σ = 20; d σ = 30; e σ = 40; f σ = 50; and g σ = 60

To select the best sigma value among the potential values, a measurement of similarity between two images based on the vector representation will be computed. Table 2 shows the distance vector values between images with different sigma values being compared. D1 is the distance vector between gradient images produced using σ = 10 and σ = 20. D2 is the distance vector between gradient images produced using σ = 20 and σ = 30. Meanwhile, D3 is the distance vector between gradient images produced using σ = 30 and σ = 40. D4 is the distance vector between gradient images produced using σ = 40 and σ = 50. Lastly, D5 is the distance vector between gradient images produced using σ = 50 and σ = 60. The values of D1 and D2 are almost the same, 0.103 and 0.108, respectively. The same goes for the distance vectors D3, D4, and D5; these three values are close to each other. However, the difference between the distance vector values D2 and D3 is larger than the differences between the other distance vector values.

Table 2 The acquired distance vector values of five distance vectors

Figure 6 presents a graph of the distance vector values between gradient images using different sigma values. This graph was plotted based on the distance vector values recorded in Table 2. In the graph, the line between D2 and D3 has a slope, unlike the lines from D3 to D5, which look flat. This means that there is a big difference between the D2 and D3 values. Since both of these distance vectors are computed from the gradient image produced using σ = 30, this research proposes this value as the best sigma value to be used to improve the GLP filter. This improvement generates a reliable orientation image for the ridge structure reconstruction.

Fig. 6
figure 6

Graph of distance vector values between gradient images using different sigma values

Figure 7 shows the difference between orientation images using σ = 4 and σ = 30. The orientation map in Fig. 7b is much better and more accurate than in Fig. 7a where there are several parts, which are represented by the yellow circles, clearly showing the difference in orientation. Thus, by using σ = 30, the orientation map was greatly enhanced.

Fig. 7
figure 7

Orientation map: a σ = 4 by Hong; b Proposed sigma value, σ = 30

3.6.1.2 Ridges structure reconstruction algorithm

With the best sigma value selected from the experiment, the next process is to implement the ridge structure reconstruction algorithm. In this algorithm, four steps will be implemented to reconstruct the damaged structure of the fingerprint ridges. The first step is to estimate the local ridge orientation by calculating the gradients from the filtered image after applying the GLP filter. The second step is to estimate the local ridge frequency by computing the x-signature of the fingerprint ridges and valleys. The third step is to determine the recoverable and unrecoverable regions of the image to be processed afterward. The final step is to implement image filtering by using a Gabor filter (Sect. 3.6.5) to reconstruct the structure of the fingerprint ridges by using the obtained ridge orientation and frequency. This algorithm uses σ = 30 as the fixed value of sigma.

3.6.2 Ridge orientation

By using a noise-free fingerprint image from the previous process, the orientation of fingerprint ridges will be estimated. Steps to implement the orientation estimation algorithm by using the input image MS are as follows:

1. Divide the image MS into blocks of size w × w. For this research, the value of w is 15.

2. Calculate the gradients, ∂x(i, j) and ∂y(i, j) at each pixel (i, j).

The computation of ∂x used the horizontal Sobel operator (Eq. 7):

$$ {\text{Horizontal}}\,{\text{Sobel}}\,{\text{operator}} = \left( {\begin{array}{*{20}c} { - 1} & { - 2} & { - 1} \\ 0 & 0 & 0 \\ 1 & 2 & 1 \\ \end{array} } \right),\partial_{x} = \left( {\begin{array}{*{20}c} 1 & 0 & { - 1} \\ 2 & 0 & { - 2} \\ 1 & 0 & { - 1} \\ \end{array} } \right) $$
(7)
$$ {\text{Vertical}}\,{\text{Sobel}}\,{\text{operator}} = \left( {\begin{array}{*{20}c} { - 1} & 0 & 1 \\ { - 2} & 0 & 2 \\ { - 1} & 0 & 1 \\ \end{array} } \right),\quad \partial_{y} = \left( {\begin{array}{*{20}c} 1 & 2 & 1 \\ 0 & 0 & 0 \\ { - 1} & { - 2} & { - 1} \\ \end{array} } \right) $$
(8)

The computation of ∂y used the vertical Sobel operator (Eq. 8):

3. At each block centered at pixel (i,j), estimate the local orientation by using Eqs. 9, 10 and 11:

$$ V_{x} \left( {i,j} \right) = \mathop \sum \limits_{{u = i - \frac{w}{2}}}^{{i + \frac{w}{2}}} \mathop \sum \limits_{{v = j - \frac{w}{2}}}^{{j + \frac{w}{2}}} 2\partial_{x} \left( {u,v} \right)\partial_{y} \left( {u,v} \right) $$
(9)
$$ V_{y} \left( {i,j} \right) = \mathop \sum \limits_{{u = i - \frac{w}{2}}}^{{i + \frac{w}{2}}} \mathop \sum \limits_{{v = j - \frac{w}{2}}}^{{j + \frac{w}{2}}} \left( {\partial_{x}^{2} \left( {u,v} \right) - \partial_{y}^{2} \left( {u,v} \right)} \right) $$
(10)
$$ \theta \left( {i,j} \right) = \frac{1}{2}\tan^{ - 1} \left( {\frac{{V_{y} \left( {i,j} \right)}}{{V_{x} \left( {i,j} \right)}}} \right) $$
(11)

where θ(i,j) is the least-squares estimate of the local orientation at the block centered at pixel (i,j). Mathematically, θ(i,j) represents the direction that is orthogonal to the dominant direction of the Fourier spectrum of the w × w window.

4. The incorrect local orientation can be fixed by using a Gaussian low-pass filter. To perform this filter, a continuous vector field is needed and can be computed by using Eqs. 12 and 13:

$$ \emptyset_{x} \left( {i,j} \right) = \cos \left( {2\theta \left( {i,j} \right)} \right) $$
(12)
$$ \emptyset_{y} \left( {i,j} \right) = \sin \left( {2\theta \left( {i,j} \right)} \right) $$
(13)

where ∅x—x component of the vector field, ∅y—y component of the vector field.

Then, GLP filtering can be performed as Eqs. 14 and 15:

$$ \emptyset_{x}^{^{\prime}} \left( {i,j} \right) = \mathop \sum \limits_{{u = - \frac{{w_{\emptyset } }}{2}}}^{{\frac{{w_{\emptyset } }}{2}}} \mathop \sum \limits_{{v = - \frac{{w_{\emptyset } }}{2}}}^{{\frac{{w_{\emptyset } }}{2}}} W\left( {u,v} \right)\emptyset_{x} \left( {i - uw,j - vw} \right) $$
(14)
$$ \emptyset_{y}^{^{\prime}} \left( {i,j} \right) = \mathop \sum \limits_{{u = - \frac{{w_{\emptyset } }}{2}}}^{{\frac{{w_{\emptyset } }}{2}}} \mathop \sum \limits_{{v = - \frac{{w_{\emptyset } }}{2}}}^{{\frac{{w_{\emptyset } }}{2}}} W\left( {u,v} \right)\emptyset_{y} \left( {i - uw,j - vw} \right) $$
(15)

where W—Two-dimensional GLP filter with unit integral, w∅ × w∅—Size of the filter.

In the original algorithm [14], the size of the filter mask was set to a fixed value of 11. The filter size controls the spatial extent of the filter. However, a fixed filter size is not optimal because it does not accommodate Gaussian waveforms with different bandwidths. Hence, to allow the filter size to vary according to the bandwidth of the Gaussian waveform, the filter size is set as a function of the sigma parameter, as in Eq. 16:

$$ w_{\emptyset } = 6\sigma $$
(16)

where w∅—Size of the filter mask, σ—Sigma value.

5. Compute the local ridge orientation at (i,j) by using Eq. 17:

$$ O\left( {i,j} \right) = \frac{1}{2}\tan^{ - 1} \left( {\frac{{\emptyset_{y}^{^{\prime}} \left( {i,j} \right)}}{{\emptyset_{x}^{^{\prime}} \left( {i,j} \right)}}} \right) $$
(17)
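A compact sketch of the orientation estimation steps above (Eqs. 7–17) is given below, assuming a noise-free floating-point input array ms. The block size, sigma, the use of SciPy's Sobel and Gaussian filters, and the pixel-resolution smoothing are illustrative choices rather than the authors' exact implementation; the doubled-angle ratio is written in the ordering used by the standard least-squares derivation, and depending on convention a rotation by π/2 may be needed to obtain the direction along the ridges rather than across them.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter, gaussian_filter

def estimate_orientation(ms, w=15, sigma=30):
    """Blockwise least-squares orientation (cf. Eqs. 9-11) followed by
    Gaussian low-pass smoothing of the doubled-angle field (cf. Eqs. 12-17)."""
    # Gradients from the horizontal and vertical Sobel operators (Eqs. 7, 8).
    gx = sobel(ms, axis=1)
    gy = sobel(ms, axis=0)
    # Sums of 2*gx*gy and (gx^2 - gy^2) over w x w neighbourhoods (Eqs. 9, 10).
    vx = uniform_filter(2.0 * gx * gy, size=w) * w * w
    vy = uniform_filter(gx ** 2 - gy ** 2, size=w) * w * w
    theta = 0.5 * np.arctan2(vx, vy)              # doubled-angle estimate (cf. Eq. 11)
    # Continuous vector field (Eqs. 12, 13) smoothed by a GLP filter whose
    # support is tied to sigma, w_phi = 6 * sigma (Eqs. 14-16).
    phi_x = gaussian_filter(np.cos(2.0 * theta), sigma=sigma, truncate=3.0)
    phi_y = gaussian_filter(np.sin(2.0 * theta), sigma=sigma, truncate=3.0)
    return 0.5 * np.arctan2(phi_y, phi_x)         # smoothed orientation (Eq. 17)
```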

Figure 8 shows the differences between the orientation image estimated by the proposed algorithm and by Hong et al. [14]. Comparing the ridge orientation in the red box between Fig. 8a and b shows that using the proposed value of sigma enhances the orientation image. The direction of the ridge structure produced and plotted using the proposed sigma value is better than that obtained using the sigma value of Hong et al. [14].

Fig. 8
figure 8

Comparison of orientation map: a orientation image by the proposed algorithm, w = 6σ with σ = 30; b orientation image by Hong et al. [14], w = 11 with σ = 4

3.6.3 Ridge frequency

Generally, there are two types of fingerprint features: global and local. Ridge frequency belongs to the global features. The frequency describes the local distance between ridges at each point of the fingerprint image. It has been widely used for fingerprint image enhancement [8, 14]. This feature is essential in the fingerprint filter design, as fingerprint ridges have a variety of widths that correspond to different ridge frequencies. To compute the frequency of the fingerprint ridges, the input image, MS, and the estimated orientation image, O, will be used. In Hong [14], the image was divided into blocks of size 16 × 16. The frequency image produced using a block size of 64 × 64 gave better results than using a block size of 16 × 16, because with a block size of 16 × 16 there are too many unrecoverable regions. Too many unrecoverable regions will affect the final image of this algorithm. To choose the most suitable block size, this research also tested block sizes of 32 × 32 and 128 × 128.

Table 3 shows the wall time consumed to estimate the frequency image for different block sizes. The larger the block size, the shorter the time consumed for the frequency estimation process, because the image is divided into larger blocks, which reduces the computation time. Although the block size of 128 × 128 consumed the least time compared to the other block sizes, in the worst case it might damage the produced frequency image if there are unrecoverable regions. After selecting a block size of 64 × 64 as the most suitable block size, the next process is to estimate the ridge frequency. The steps involved in local ridge frequency estimation are as follows:

  1. Divide the input image MS into blocks of size w × w (64 × 64).

  2. For each block centered at pixel (i,j), compute an oriented window of size l × w (128 × 64) that is defined in the ridge coordinate system.

  3. For each block centered at pixel (i,j), compute the x-signature, X[0], X[1], …, X[l−1], of the ridges and valleys within the oriented window, where

    $$ X\left[ k \right] = \frac{1}{w}\mathop \sum \limits_{d = 0}^{w - 1} MS\left( {u,v} \right),\quad k = 0,1, \ldots ,l - 1 $$
    (18)
    $$ u = i + \left( {d - \frac{w}{2}} \right)\cos O\left( {i,j} \right) + \left( {k - \frac{l}{2}} \right){\text{sin}}\,O\left( {i,j} \right) $$
    (19)
    $$ v = j + \left( {d - \frac{w}{2}} \right)\sin O\left( {i,j} \right) + \left( {\frac{l}{2} - k} \right){\text{cos}}\,O\left( {i,j} \right) $$
    (20)
Table 3 The wall time of the frequency estimation for different size of blocks

If no minutiae appear in the oriented window, the x-signature forms a discrete sinusoidal-shaped wave. Hence, the frequency of the ridges and valleys can be determined from the x-signature by using Eq. 21. If no peaks are detected, the frequency is assigned a value of − 1.

$$ \Omega \left( {i,j} \right) = 1/T\left( {i,j} \right) $$
(21)

where Τ(i,j)—Average number of pixels between two peaks in the x-signature, Ω(i, j)—Frequency image.
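Below is a small sketch of the x-signature-based frequency estimate for a single block (Eqs. 18–21), assuming ms is a 2-D NumPy array and orientation is the local ridge orientation O(i, j) in radians; the peak-picking rule, the rounding of coordinates, and the function name are illustrative assumptions.

```python
import numpy as np

def ridge_frequency_block(ms, i, j, orientation, w=64, l=128):
    """Estimate the ridge frequency of the block centred at (i, j) from its
    x-signature; returns -1 when no valid peaks are found (Eqs. 18-21)."""
    cos_o, sin_o = np.cos(orientation), np.sin(orientation)
    x_sig = np.zeros(l)
    for k in range(l):
        total, count = 0.0, 0
        for d in range(w):
            # Sample the oriented window in the ridge coordinate system (Eqs. 19, 20).
            u = int(round(i + (d - w / 2) * cos_o + (k - l / 2) * sin_o))
            v = int(round(j + (d - w / 2) * sin_o + (l / 2 - k) * cos_o))
            if 0 <= u < ms.shape[0] and 0 <= v < ms.shape[1]:
                total += ms[u, v]
                count += 1
        x_sig[k] = total / max(count, 1)                       # Eq. 18
    # Local maxima of the (roughly sinusoidal) x-signature.
    peaks = [k for k in range(1, l - 1)
             if x_sig[k] > x_sig[k - 1] and x_sig[k] >= x_sig[k + 1]]
    if len(peaks) < 2:
        return -1.0
    avg_period = (peaks[-1] - peaks[0]) / (len(peaks) - 1)     # T(i, j)
    return 1.0 / avg_period                                    # Eq. 21
```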

3.6.4 Region mask

There are three categories of regions of interest in a digital fingerprint image. The first category is the well-defined region, where the ridges and valleys of the fingerprint image are clearly differentiated from one another. The second category is the recoverable corrupted region, where the ridges and valleys are damaged by scars, smudges, and so on; however, the ridges and valleys can still be seen and the neighboring regions still provide satisfactory information. The third category is the unrecoverable corrupted region, where the ridges and valleys are so severely damaged that the ridge structure cannot be seen clearly and the neighboring regions do not provide satisfactory information.

Based on the assessment of the wave shape formed by the local ridges and valleys, the classification of pixels into recoverable and unrecoverable categories can be performed. Three features are used to characterize the sinusoidal-shaped wave: amplitude (α), frequency (β), and variance (γ). Let X[1], X[2], …, X[l] be the x-signature of a block centered at (i, j). The three features corresponding to a pixel (block) (i, j) are computed as follows, as illustrated in the sketch after this list:

  1. α = (average height of the peaks − average depth of the valleys).

  2. β = 1/T(i, j), where T(i, j) is the average number of pixels between two consecutive peaks.

  3. \(\gamma = \frac{1}{l}\mathop \sum \limits_{i = 1}^{l} \left( {X\left[ i \right] - \left( {\frac{1}{l}\mathop \sum \limits_{i = 1}^{l} X\left[ i \right]} \right)} \right)^{2}\)
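A small sketch of how the three features can be computed from a block's x-signature is shown below; the peak and valley detection rules and the handling of degenerate blocks are illustrative assumptions, not the authors' exact rules.

```python
import numpy as np

def xsignature_features(x_sig):
    """Amplitude (alpha), frequency (beta), and variance (gamma) of an
    x-signature, used to label a block as recoverable or unrecoverable."""
    x = np.asarray(x_sig, dtype=float)
    l = len(x)
    peaks = [k for k in range(1, l - 1) if x[k] > x[k - 1] and x[k] >= x[k + 1]]
    valleys = [k for k in range(1, l - 1) if x[k] < x[k - 1] and x[k] <= x[k + 1]]
    if len(peaks) < 2 or not valleys:
        return None                                    # block cannot be characterised
    alpha = x[peaks].mean() - x[valleys].mean()        # average peak height minus valley depth
    beta = (len(peaks) - 1) / (peaks[-1] - peaks[0])   # 1 / T(i, j)
    gamma = x.var()                                    # variance of the x-signature
    return alpha, beta, gamma
```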

3.6.5 Image filtering for reconstruction

The Gabor filter has both orientation-selective and frequency-selective properties and has an optimal joint resolution in both the spatial and frequency domains [7, 15]. It is therefore appropriate to use the Gabor filter as the bandpass filter for removing noise in the image while preserving the true structure of the fingerprint ridges and valleys. Once the ridge orientation and ridge frequency information has been determined, parameters such as the filter orientation, the ridge frequency, and the standard deviations σx and σy of the Gaussian envelope along the x and y axes are used to construct the even-symmetric Gabor filter. Figure 9 below shows the final image of the broken ridges reconstruction process. The broken ridges indicated by the red rectangle were reconnected.

Fig. 9
figure 9

Enhanced image
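For illustration, a minimal even-symmetric Gabor kernel tuned to a local orientation and frequency can be built as sketched below; the kernel size and the values of σx and σy are placeholder assumptions, and each block would be convolved with the kernel built from its own O(i, j) and Ω(i, j).

```python
import numpy as np

def even_gabor_kernel(orientation, frequency, sigma_x=4.0, sigma_y=4.0, size=11):
    """Even-symmetric Gabor kernel: a Gaussian envelope modulated by a cosine
    wave running across the ridges at the given orientation and frequency."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the ridge coordinate system.
    x_theta = x * np.cos(orientation) + y * np.sin(orientation)
    y_theta = -x * np.sin(orientation) + y * np.cos(orientation)
    envelope = np.exp(-0.5 * ((x_theta / sigma_x) ** 2 + (y_theta / sigma_y) ** 2))
    return envelope * np.cos(2.0 * np.pi * frequency * x_theta)
```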

3.7 Feature extraction

To extract the minutiae, three templates are designed to detect and mark the minutiae locations in the fingerprint image. The first template finds ridge bifurcations along the ridges. The second template finds ridge endings. The third template is designed for a special case where a general bifurcation may be triple counted. A 3 × 3 pixel window is used for the templates. This process is applied to the thinned image because minutiae points are easy to find in the thinned image. Then, the number of extracted minutiae is stored for the evaluation stage.
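The exact 3 × 3 templates are not reproduced here. As an illustration of the same idea, the sketch below uses the widely known crossing-number rule on a thinned binary image (ridge pixels equal to 1): a crossing number of 1 marks a ridge ending and a crossing number of 3 marks a bifurcation. This is a common alternative formulation, not necessarily the authors' template set.

```python
import numpy as np

def mark_minutiae(thinned):
    """Crossing-number minutiae detection on a thinned binary image.
    Returns lists of (row, col) positions of ridge endings and bifurcations."""
    endings, bifurcations = [], []
    rows, cols = thinned.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if thinned[r, c] != 1:
                continue
            # 8-neighbours in circular order, first value repeated to close the loop.
            p = [thinned[r - 1, c], thinned[r - 1, c + 1], thinned[r, c + 1],
                 thinned[r + 1, c + 1], thinned[r + 1, c], thinned[r + 1, c - 1],
                 thinned[r, c - 1], thinned[r - 1, c - 1], thinned[r - 1, c]]
            cn = sum(abs(int(p[k]) - int(p[k + 1])) for k in range(8)) // 2
            if cn == 1:
                endings.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return endings, bifurcations
```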

4 Experimental results

For this research, three evaluations were executed: (a) the effectiveness measurement, discussed in Sect. 4.1; (b) the percentage of minutiae changes, discussed in Sect. 4.2; and (c) the quality index measurement, discussed in Sect. 4.3. Then, a comparison between the results of the proposed image enhancement approach and previous works, together with the constraints and drawbacks, is discussed in Sects. 4.4 and 4.5.

4.1 Performance and effectiveness measurement of MS filter

In this section, the performance of the proposed filter in terms of visual inspection will be presented. The effectiveness of this filter was calculated based on the distance vector, also known as the dissimilarity value. The initial results are displayed in Tables 4 and 5 and are discussed at the end of this section.

Table 4 Average of distance vector value of 10 different fingerprint images for the first category (right thumb)
Table 5 Average of distance vector value of 10 different fingerprint images for second category (right index)

4.1.1 Effectiveness measurement

For this evaluation, 100 samples of fingerprint images will be used to measure the similarity of fingerprint images processed by the median filter, the sigmoid function, and the MS filter. For the training dataset, 100 original images were categorized into 10 categories. Each category refers to one finger. In one category, there are 10 images from 10 different persons. The MS filter was applied to these images to obtain filtered images from the original images. On the other hand, for the testing dataset, Gaussian noise with different variance values from 0.01 to 0.05 was added to the 100 images. Then, the MS filter was applied to remove the noise. The filtered images produced were used as the test dataset. For better understanding, Fig. 10 shows how the effectiveness measurement is implemented.

Fig. 10
figure 10

Framework of effectiveness measurement computation of MS filter

The computation was implemented by using Eqs. 22, 23 and 24:

$$ {\text{For}}\,{\text{training}}\,{\text{dataset}}: {\text{Normalized}}\,{\text{Vector}},a^{\prime } = \frac{a}{\left| a \right|} $$
(22)
$$ {\text{For}}\,{\text{testing}}\,{\text{dataset}}:\,{\text{Normalized}}\,{\text{Vector}},\, b^{{\prime }} = \frac{b}{\left| b \right|} $$
(23)
$$ \begin{aligned} {\text{Distance}}\,{\text{vector}}\,{\text{between}}\,{\text{two}}\,{\text{images}} & : \,{\text{Distance}},\,d = \sqrt {\left( {a^{\prime } - b^{\prime } } \right)^{2} } \\ & = \sqrt {\left( {a^{\prime } - b^{\prime } } \right)^{T} \left( {a^{\prime } - b^{\prime } } \right)} \\ \end{aligned} $$
(24)

where T denotes the transpose operation.
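A small sketch of the dissimilarity computation in Eqs. 22–24 is given below; the flattening of each image into a single vector is an assumption of how the vectors a and b are formed.

```python
import numpy as np

def distance_vector(img_a, img_b):
    """Dissimilarity between two images: each image is flattened, normalised
    to unit length (Eqs. 22, 23), and the Euclidean distance is taken (Eq. 24)."""
    a = img_a.astype(float).ravel()
    b = img_b.astype(float).ravel()
    a_n = a / np.linalg.norm(a)               # Eq. 22
    b_n = b / np.linalg.norm(b)               # Eq. 23
    return float(np.linalg.norm(a_n - b_n))   # Eq. 24
```

Identical images give a distance of 0, so a smaller value indicates more similar images, which is the interpretation used in Tables 4 and 5.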

Table 4 shows the average distance vector value of 10 different fingerprint images for the first category which is the right thumb. By using Eqs. 22, 23, 24, the distance vector values for all the 10 different images with different variance values for Gaussian noise were computed. Based on Table 4, the MS filter acquired the largest ratio value compared to the median filter and sigmoid function with a ratio of 1:3.114.

Table 5 shows the average distance vector value of 10 different fingerprint images for the second category, which is the right index. Based on Table 5, it can be seen that the MS filter obtained the largest ratio value, which is 1: 3.398 compared to the median filter and sigmoid function.

Based on both Tables 4 and 5, the ratio between the average values for the same images and the average values for different images is largest for the MS filter compared to the median filter and the sigmoid function. The smaller the average distance vector value is, the more similar the images are. Under this interpretation, the median filter obtained the smallest average distance values for each variance, which shows that the median filter is very efficient. However, the ratio between the average distance values for two identical images and for two different images determines the best filter: the larger the ratio value is, the better the recognition performance is expected to be. The rationale is that if two copies of the same image are compared, the distance vector value should be small, both logically and mathematically, because the information in those images is essentially the same. On the other hand, if two different images are compared, the distance vector value should be large because the information in those images is not comparable. Hence, based on this explanation, the best filter for removing the noise is the filter with the largest ratio value.

4.2 Results of enhancement and extraction

Figure 11 shows the thinned images produced before and after the thinning and minutiae marking process. In Fig. 11, it can be seen that the scar area on the right side of the fingerprint (Fig. 11a) has been fixed: the broken ridges were reconnected (Fig. 11b). Besides, the marked minutiae in the fingerprint image without the enhancement process are shown in Fig. 11c, and the marked minutiae in the fingerprint image with the enhancement process are shown in Fig. 11d. Figure 12 shows further examples of images obtained as the output of the MS filtering, ridge structure enhancement, and minutiae extraction processes.

Fig. 11
figure 11

Minutiae marking: a and c output images without enhancement process; b and d output images with enhancement process

Fig. 12
figure 12

Output images of MS filtering, ridge reconstruction, and minutiae marking

4.2.1 Evaluation of minutiae changes percentage

From the extraction process, the number of extracted minutiae was already counted and stored. This evaluation was conducted to observe the difference between the numbers of detected minutiae without and with the enhancement process. Figure 13 shows the framework of this evaluation. Based on the evaluation framework shown in Fig. 13, the number of detected minutiae was counted in the thinned images produced with and without the enhancement process. For testing purposes, 20 fingerprint images that contain scars from NIST Special Database 14 were used as the datasets to evaluate the effectiveness of the proposed enhancement approach. As mentioned earlier, the minutiae to be extracted are ridge endings and ridge bifurcations. Thus, this evaluation calculates the percentage of ridge ending changes and the percentage of ridge bifurcation changes. These percentages can be computed using the following Eqs. 25 and 26 (Fig. 14):

Fig. 13
figure 13

Framework of the percentage of minutiae changes evaluation

Fig. 14
figure 14

Quality index calculation flow

Percentage ridge ending changes:

$$ {\text{RE}}\,{\text{Change}} = \frac{{{\text{RE}}_{{{\text{before}}}} - {\text{RE}}_{{{\text{after}}}} }}{{{\text{RE}}_{{{\text{before}}}} }} \times 100 $$
(25)

Percentage ridge bifurcation changes:

$$ {\text{RB}}\,{\text{Change}} = \frac{{{\text{RB}}_{{{\text{before}}}} - {\text{RB}}_{{{\text{after}}}} }}{{{\text{RB}}_{{{\text{before}}}} }} \times 100 $$
(26)

The numbers of ridge endings and bifurcations without the enhancement were stored as REbefore and RBbefore, respectively. Meanwhile, the numbers of ridge endings and bifurcations with the enhancement process were stored as REafter and RBafter, respectively. By using Eq. 25, the percentage of ridge ending changes can be computed. First, the number of detected ridge endings in the thinned image with the enhancement is subtracted from the number of detected ridge endings in the thinned image without the enhancement. The result of the subtraction is then divided by the number of detected ridge endings in the thinned image without the enhancement. To obtain the percentage, the value from the division is multiplied by one hundred. The same steps are applied to Eq. 26 for the ridge bifurcation changes percentage calculation.
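As a small worked illustration of Eqs. 25 and 26 with hypothetical counts:

```python
def percentage_change(before, after):
    """Percentage of minutiae change between the image without and with enhancement."""
    return (before - after) / before * 100.0

# Hypothetical counts: 200 ridge endings before enhancement and 50 after
# give a change of (200 - 50) / 200 * 100 = 75.0 percent.
print(percentage_change(200, 50))
```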

4.2.2 Result of minutiae changes percentage

Table 6 shows the percentage of minutiae changes (ridge endings and bifurcations) in the thinned images with and without the enhancement process for the 20 datasets. Based on Table 6, the number of detected ridge bifurcations without the enhancement is larger than the number of detected ridge endings without the enhancement. The largest numbers of detected ridge endings and bifurcations without the enhancement are 1661 and 3331, respectively, both belonging to the dataset f0000078.bmp. However, with the implementation of the enhancement process, the number of detected ridge bifurcations is smaller than the number of detected ridge endings. The largest number of detected ridge endings with the enhancement is 501, which belongs to dataset f0000026.bmp, and the largest number of detected ridge bifurcations with the enhancement is 153, which belongs to dataset f0000079.bmp. This means that many fake minutiae were removed from the images, true minutiae were detected, and the ridge structure of the fingerprint was fixed by the proposed enhancement approach.

Table 6 Percentage of minutiae (ridge ending and bifurcation) changes in the thinned images without and with the enhancement process

4.3 Result of the quality index measurement

The quality index in this study indicates the minutiae extraction accuracy. This evaluation aims to measure the performance of the proposed image enhancement algorithm quantitatively. The quality of an image is measured by using the extracted minutiae points obtained in the feature extraction stage.

In this research, the quality of the enhanced image is measured by comparing the minutiae set detected by a human expert with the minutiae set detected during the minutiae extraction stage. For this evaluation, only minutiae points in the scar area of the fingerprint images are taken into account. This restriction highlights the focus of this research, which is the broken ridges (scar area). The quality index of the enhanced image can be calculated by using Eq. 27:

$$ {\text{Quality Index, }}\,{\text{QI}} = \frac{c}{c + f + u} $$
(27)

where c—Correctly detected minutiae (by human expert), f—Falsely detected minutiae, u—Undetected minutiae.
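A small worked example of Eq. 27 with hypothetical counts:

```python
def quality_index(c, f, u):
    """Quality index: correctly detected minutiae over all accounted minutiae (Eq. 27)."""
    return c / (c + f + u)

# Hypothetical counts: 10 correct, 4 falsely detected, and 6 undetected minutiae
# give a quality index of 10 / (10 + 4 + 6) = 0.5.
print(quality_index(10, 4, 6))
```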

Table 7 shows the result of the quality index obtained using the proposed enhancement approach. In the second column of the table, the number of correctly detected minutiae, c, for each dataset is recorded; in this case, a human expert detected the minutiae for each dataset. In the third column, the number of minutiae falsely detected by the system, f, is recorded. The number of minutiae undetected by the system, u, for each dataset is recorded in the fourth column. In the last column, the quality index computed using Eq. 27 is recorded. Based on Table 7, the mean quality index value of the 20 dataset images using the proposed image enhancement approach is 0.51, whereas the standard deviation of the quality index is 0.30. In this case, the mean value of the quality index is more than 0.50, which passes the minimum value of the quality index to be achieved in this research. This value indicates that the proposed approach has successfully increased the quality of scarred fingerprint images.

Table 7 Result of the quality index by using the proposed enhancement approach

Figure 15 shows a bar chart of the number of correctly detected minutiae, c, determined by a human expert. The minimum number of correctly detected minutiae is zero, which belongs to dataset f0000070.bmp; here, the value 0 indicates that no minutiae were detected. The largest number of minutiae detected by the human expert is 23, which belongs to dataset f0000084.bmp. This value indicates that a total of 23 minutiae exist in the focus area. Figure 16 shows a bar chart of the number of falsely detected minutiae, f, for each dataset using the proposed enhancement approach. From the chart, there are four datasets with no falsely detected minutiae, which means that the system accurately detected the genuine minutiae in those datasets. However, the system falsely detected minutiae in one of the datasets, f0000080.bmp, with the highest number of 40. This happened because the scar is big, long, and located in the bright area of the fingerprint image; as a result, the system failed to reconstruct the ridge structure.

Fig. 15
figure 15

Number of correctly detected minutiae, c chart by a human expert

Fig. 16
figure 16

Number of falsely detected minutiae, f chart by using the proposed enhancement approach

Figure 17 shows a bar chart of the number of undetected minutiae, u, using the proposed enhancement approach. Based on the chart, dataset f0000078.bmp recorded the highest number of undetected minutiae, which is eight. For this dataset, the system should detect 11 minutiae points, but only three minutiae points were correctly detected; the other eight minutiae points were undetected by the system. On the other hand, the system did not miss any minutiae points in five datasets, since all the minutiae points in those datasets were correctly detected by the system.

Fig. 17
figure 17

Number of undetected minutiae, u chart by using proposed enhancement approach

In this research, there are several cases in this evaluation calculation. In the first case, if the values of c, f and u are all 0, then the quality index value, QI, is taken as 1. For example, on dataset f0000070.bmp, the number of minutiae correctly detected by the human expert is 0, as shown in Fig. 18a, and no false or undetected minutiae were produced by the system in the enhanced image, as shown in Fig. 18b. This means that the minutiae extraction algorithm functioned correctly and gave an accurate result. In another case, the system detected more falsely detected minutiae than correctly detected minutiae. This happens if the scar area lies in a dark or bright area of the fingerprint image, because such an area is calculated as an unrecoverable region for the ridge structure enhancement. For example, on dataset f0000081.bmp, the scar area is located in the dark area of the fingerprint image (Fig. 18c). This dark area was calculated as an unrecoverable region during the region mask calculation. Hence, this area could not be processed, which affected the final image. Because the final image was affected, the system falsely detected 20 minutiae, as shown in Fig. 18d. This is the reason the number of falsely detected minutiae was larger than the number of correctly detected minutiae.

Fig. 18
figure 18

The first and second cases of quality index calculation, datasets f0000070.bmp and f0000081.bmp: a no minutiae detected in the scar area; b no minutiae detected in the scar area of the enhanced image; c scar in a dark area with three minutiae detected by a human expert; d 20 minutiae falsely detected by the system in the enhanced image

6.4 Comparison between the results of the proposed image enhancement approach and previous works

To compare the proposed approach with other approaches, the quality index of the 20 enhanced images was also computed using the enhancement approaches of Hong et al. [14], Chikkerur et al. [8], and Bana and Kaur [4]; the results are shown in Tables 8, 9, and 10, respectively. The comparison of the mean and standard deviation values among these approaches is recorded in Table 11. Table 8 shows the results of the quality index obtained using the approach of Hong et al. [14].

Table 8 Result of the quality index by using Hong et al. [14] technique
Table 9 Result of the quality index by using Chikkerur et al. [8] technique

The number of correctly detected minutiae, c, for each dataset in the second column of Table 8 is the same as in Table 7 because the same datasets were used. The highest number of falsely detected minutiae, f, is 59, recorded for dataset f0000079.bmp, while the lowest is 0, recorded for two datasets. In the fourth column of Table 8, the highest number of undetected minutiae is 9, recorded for dataset f0000084.bmp. Here, the mean quality index is 0.37 and the standard deviation is 0.27. As mentioned earlier, the minimum quality index targeted in this research is 0.50, and the mean value obtained with the approach of Hong et al. [14] falls below this target. Hence, this result shows that the quality index obtained with the proposed enhancement approach is better than that of Hong et al. [14].

Table 9 shows the results of the quality index obtained using the approach of Chikkerur et al. [8]. The number of correctly detected minutiae, c, for each dataset in the second column of Table 9 is the same as in Tables 7 and 8 because the same datasets were used. The highest number of falsely detected minutiae, f, is 47, recorded for dataset f0000078.bmp, while the lowest is 0, recorded for two datasets. In the fourth column of Table 9, the highest number of undetected minutiae is 9, recorded for dataset f0000078.bmp. Here, the mean quality index is 0.33 and the standard deviation is 0.21. The mean quality index from this approach is also below the targeted minimum value and even lower than that of Hong et al. [14]. Hence, the results indicate that the approach of Chikkerur et al. [8] is unable to improve the quality of scarred fingerprint images.

Table 10 shows the results of the quality index obtained using the approach of Bana and Kaur [4]. The highest number of falsely detected minutiae, f, is 103, recorded for dataset f0000080.bmp, while the lowest is 8, recorded for dataset f0000070.bmp. The highest number of undetected minutiae is 16, recorded for dataset f0000084.bmp. Here, the mean quality index is 0.11 and the standard deviation is 0.07. Both the mean and standard deviation obtained with this approach are the lowest compared to Hong et al. [14], Chikkerur et al. [8], and the proposed approach. Thus, the approach of Bana and Kaur [4] is not suitable for enhancing the quality of scarred fingerprint images.

Table 10 Result of the quality index by using Bana and Kaur [4] technique
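The following sketch shows how the comparison summarized in Table 11 can be assembled from the mean and standard deviation values quoted in this section; the 0.50 threshold is the minimum quality index targeted in this research.

```python
# Mean and standard deviation of the quality index, as reported in this section.
approaches = {
    "Proposed approach":    (0.51, 0.30),
    "Hong et al. [14]":     (0.37, 0.27),
    "Chikkerur et al. [8]": (0.33, 0.21),
    "Bana and Kaur [4]":    (0.11, 0.07),
}

# Rank the approaches by mean quality index (larger is better).
for name, (qi_mean, qi_std) in sorted(approaches.items(),
                                      key=lambda kv: kv[1][0], reverse=True):
    status = "passes" if qi_mean >= 0.50 else "below"
    print(f"{name:<22} mean={qi_mean:.2f} std={qi_std:.2f} ({status} the 0.50 target)")
```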

Figure 19a shows the comparison of falsely detected minutiae among the approaches for the 20 datasets. Based on the chart, the highest number of falsely detected minutiae, 103, was recorded by the approach of Bana and Kaur [4], because that approach is not designed to handle scarred fingerprint images. Meanwhile, the highest number of falsely detected minutiae obtained with the proposed enhancement approach is only 40, since the proposed approach was designed specifically to address the scar problem. In the comparison of the number of undetected minutiae shown in Fig. 19b, the highest value, 16 minutiae, was again recorded by the approach of Bana and Kaur [4]. In that case the system failed to detect minutiae correctly because two ridge endings may have been joined, or a ridge bifurcation may have turned into two ridge endings, after the enhancement process. By contrast, the proposed enhancement approach recorded at most eight undetected minutiae. These results show that the proposed enhancement approach reduced the number of undetected minutiae compared with the other three approaches.

Fig. 19
figure 19

Comparison of the number of a falsely detected and b undetected minutiae among the approaches

Table 11 and Fig. 20 show the comparison of the quality index measurements among the approaches. Based on Table 11 and Fig. 20, the mean quality index obtained with the proposed approach is the largest of all the approaches; the same holds for the standard deviation. As stated before, the larger the quality index, the better the minutiae extraction algorithm and the enhancement approach used. Hence, it can be concluded that the proposed enhancement approach in this research successfully improved the quality of the scarred fingerprint images.

Table 11 Comparison of quality index value between approaches
Fig. 20
figure 20

Bar chart comparing the quality index measurements among the approaches

Based on the results in this section, even though there are cases in which broken ridges were not successfully reconnected, the proposed approach still gives the best quality index evaluation among the compared approaches, as shown by its largest mean (and standard deviation) value. Hence, it can be concluded that the proposed approach successfully improved the quality of the scarred fingerprint images.

7 Conclusion

In reference to the overall findings of this research, it can be summarized that fingerprint image enhancement plays a very important role in an AFIS. Acquired fingerprint images are not always well defined: because of variations in impression conditions, ridge configuration, skin condition, acquisition devices, and the non-cooperative attitude of subjects, a significant percentage of acquired fingerprint images are of poor quality. In addition, the low contrast of fingerprint images also degrades image quality. These factors cause problems, especially in the performance of minutiae extraction and fingerprint recognition. This research focused on enhancing fingerprint images that contain broken ridges caused by scars. Such broken ridges may lead to incorrect minutiae detection during feature extraction, and the resulting inaccurate minutiae in turn reduce the performance of fingerprint recognition. Based on the results of all the evaluations, it can be concluded that each of the objectives of this research was achieved: this research improved the quality of scarred fingerprint images through the use of a new filter and an enhanced ridge structure reconstruction method.