Background

Data mining is an active area of research concerned with mining or retrieving data/information from large databases or libraries. Image retrieval is the part of data mining in which visual information (images) is retrieved from a large database or library. Earlier, text-based retrieval was used: images were annotated with text, and text-based database management systems were then used to perform image retrieval. Many advances, such as data modelling, multidimensional indexing, and query evaluation, have been made along this research direction. However, two major difficulties remain, especially when the size of image collections is large (tens or hundreds of thousands). One is the vast amount of labor required to annotate the images manually. The other, more essential difficulty arises from the rich content of images and the subjectivity of human perception: different people may perceive the same image content differently. To address these issues, content-based image retrieval (CBIR) came into existence. CBIR utilizes visual content such as color, texture, and shape for indexing and retrieving images from the database. A comprehensive and extensive literature on feature extraction for CBIR is available in [1–5].

Color not only adds beauty to images/video but also provides more information about the scene. This information is used as a feature for retrieving images. Various color-based image search schemes have been proposed; some of these are discussed in this section. Swain and Ballard [6] introduced the color histogram feature and the histogram intersection distance metric to measure the distance between the histograms of images. The global color histogram, which gives the probability of occurrence of each unique color in the image as a whole, is extensively utilized for retrieval. It is a fast method with translation, rotation and scale invariance, but it lacks spatial information, which yields false retrieval results. Stricker et al. [7] proposed two new color indexing schemes. In the first approach they used the cumulative color histogram. In the second, instead of storing the complete color distributions, the first three moments of each color channel of the image are used. Idris and Panchanathan [8] used vector quantization to compress the image and, from the codewords of each image, obtained a histogram of codewords which was used as the feature vector. In the same manner, Lu et al. [9] proposed a feature for color image retrieval by combining the discrete cosine transform (DCT) with vector quantization. A well-known image compression method, block truncation coding (BTC), is used in [10] for extracting two features, the block color co-occurrence matrix (BCCM) and the block pattern histogram (BPH); image descriptors are also generated there with the help of vector quantization. The global histogram also lacks spatial information. To overcome this problem, Huang et al. [11] proposed the color correlogram method, which includes the local spatial distribution of color information for image retrieval.
Pass and Zabih [12] proposed the color coherence vectors (CCV), a histogram-based approach that incorporates some spatial information. Rao et al. [13] proposed modifications of the color histogram to capture spatial information, and for this purpose they introduced three spatial color histograms: annular, angular and hybrid color histograms. Chang et al. [14] proposed a method which accounts for changes in color due to changes of illumination, the orientation of the surface, and the viewing geometry of the camera, with a smaller feature vector length as compared to the color correlogram.
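As an illustration of the histogram-intersection idea of Swain and Ballard [6] discussed above, the following sketch builds a global RGB histogram and scores a pair of images by the sum of bin-wise minima; the bin count, the L1 normalization and the function names are illustrative choices, not those of [6]:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Per-channel histogram of an RGB image, concatenated and
    L1-normalized so the intersection score lies in [0, 1]."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Swain-Ballard similarity: sum of element-wise minima.
    Identical histograms score 1; disjoint histograms score 0."""
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3))
sim_self = histogram_intersection(color_histogram(img), color_histogram(img))
```

Note that the score is invariant to any spatial rearrangement of the pixels, which is exactly the lack of spatial information criticized above.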

Texture is another important feature for CBIR. Smith et al. used the mean and variance of the wavelet coefficients as texture features for CBIR [15]. Moghaddam et al. proposed the Gabor wavelet correlogram (GWC) for CBIR [16]. Ahmadian et al. used the wavelet transform for texture classification [17]. Subrahmanyam et al. proposed the correlogram algorithm for image retrieval using wavelets and rotated wavelets (WC + RWC) [18]. Ojala et al. proposed the local binary pattern (LBP) features for texture description [19], and these LBPs were converted to rotational invariant LBPs for texture classification [20]. Pietikainen et al. proposed rotational invariant texture classification using feature distributions [21]. Ahonen et al. [22] and Zhao et al. [23] used the LBP operator for facial expression analysis and recognition. Heikkila et al. proposed background modeling and detection using LBP [24]. Huang et al. proposed the extended LBP for shape localization [25]. Heikkila et al. used the LBP for interest region description [26]. Li et al. used the combination of Gabor filter and LBP for texture segmentation [27]. Zhang et al. proposed the local derivative pattern for face recognition [28]. They considered LBP as a nondirectional first-order local pattern, i.e., the binary result of the first-order derivative in images. A block-based texture feature which uses the LBP texture feature as the source of image description is proposed in [29] for CBIR. The center-symmetric local binary pattern (CS-LBP), a modified version of the well-known LBP feature, is combined with the scale invariant feature transform (SIFT) in [30] to describe regions of interest. Yao et al. [31] have proposed two types of local edge pattern (LEP) histograms: LEPSEG for image segmentation and LEPINV for image retrieval.
The LEPSEG is sensitive to variations in rotation and scale, whereas the LEPINV is resistant to such variations. The local ternary pattern (LTP) [32] has been introduced for face recognition under different lighting conditions. Subrahmanyam et al. have proposed various pattern-based features such as local maximum edge binary patterns (LMEBP) [33], local tetra patterns (LTrP) [34] and directional local extrema patterns (DLEP) [35] for natural/texture image retrieval, and directional binary wavelet patterns (DBWP) [36], local mesh patterns (LMeP) [37] and local ternary co-occurrence patterns (LTCoP) [38] for biomedical image retrieval. Reddy et al. [39] have extended the DLEP features by adding the magnitude information of the local gray values of an image. Hussain and Triggs [40] have proposed the local quantized patterns (LQP) for visual recognition.

Recently, the integration of color and texture features has been proposed for image retrieval. Jhanwar et al. [41] have proposed the motif co-occurrence matrix (MCM) for content-based image retrieval. They also proposed the color MCM, which is calculated by applying the MCM on the individual red (R), green (G), and blue (B) color planes. Lin et al. [42] combined a color feature, the k-means color histogram (CHKM), with texture features, the motif co-occurrence matrix (MCM) and the difference between pixels of a scan pattern (DBPSP). Vadivel et al. [43] proposed the integrated color and intensity co-occurrence matrix (ICICM) for image retrieval applications. They first analyzed the properties of the HSV color space and then suggested suitable weight functions for estimating the relative contributions of color and gray levels of an image pixel. Vipparthi et al. [44] have proposed the local quinary patterns for image retrieval.

The concepts of LQP [40] and DLEP [35] have motivated us to propose the local quantized extrema patterns (LQEP) for image retrieval applications. The main contributions of this work are summarized as follows. (a) The proposed method collects the directional quantized extrema information from the query/database image by integrating the concepts of LQP and DLEP. (b) To improve the performance of the CBIR system, the LQEP operator is combined with the RGB color histogram. (c) The performance of the proposed method is tested on benchmark databases for natural and texture image retrieval.

The paper is organized as follows: "Background" gives a brief review of image retrieval and related work. "Review of existing local patterns" presents a concise review of existing local pattern operators. The proposed system framework and similarity distance measures are illustrated in "Local quantized extrema patterns (LQEP)". Experimental results and discussions are given in "Experimental results and discussion". The conclusions and future scope are given in "Conclusions".

Review of existing local patterns

Local binary patterns (LBP)

The LBP operator was introduced by Ojala et al. [19] for texture classification. Success in terms of speed (no need to tune any parameters) and performance has been reported in many research areas such as texture classification [18–21], face recognition [22, 23], object tracking [33], image retrieval [33–39] and fingerprint recognition. Given a center pixel in the 3 × 3 pattern, the LBP value is calculated by comparing its gray scale value with those of its neighboring pixels based on Eqs. (1) and (2):

$$ LBP_{P,R} = \sum\limits_{p = 1}^{P} {2^{(p - 1)} \times f_{1} (I(g_{p} ) - I(g_{c} ))} $$
(1)
$$ f_{1} (x) = \left\{ {\begin{array}{*{20}l} 1 & {x \ge 0} \\ 0 & {else} \\ \end{array} } \right. $$
(2)

where \( I(g_{c} ) \) denotes the gray value of the center pixel, \( I(g_{p} ) \) represents the gray value of its neighbors, \( P \) stands for the number of neighbors and \( R \), the radius of the neighborhood.

After computing the LBP pattern for each pixel (j, k), the whole image is represented by building a histogram as shown in Eq. (3).

$$ H_{LBP} (l) = \sum\limits_{j = 1}^{{N_{1} }} {\sum\limits_{k = 1}^{{N_{2} }} {f_{2} (} } LBP(j,k),l);\,l \in [0,\,(2^{P} - 1)] $$
(3)
$$ f_{2} (x,y) = \left\{ {\begin{array}{*{20}l} 1 & {x = y} \\ 0 & {else} \\ \end{array} } \right. $$
(4)

where the size of input image is \( N_{1} \times N_{2} \).

Figure 1 shows an example of obtaining an LBP from a given 3 × 3 pattern. The histograms of these patterns contain the information on the distribution of edges in an image.
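Eqs. (1) and (2) can be traced on a single 3 × 3 patch as follows. This is a minimal sketch: the counter-clockwise neighbor ordering starting at the right is an assumed convention (any fixed ordering yields an equivalent descriptor), and the example patch is not the one in Fig. 1:

```python
import numpy as np

def lbp_3x3(patch):
    """LBP code of the center pixel of a 3x3 patch per Eqs. (1)-(2)."""
    c = patch[1, 1]
    # (row, col) offsets of the 8 neighbors, counter-clockwise, p = 1..8
    offsets = [(1, 2), (0, 2), (0, 1), (0, 0),
               (1, 0), (2, 0), (2, 1), (2, 2)]
    code = 0
    for p, (r, k) in enumerate(offsets):
        if patch[r, k] >= c:      # f1(x) = 1 when I(g_p) - I(g_c) >= 0
            code += 2 ** p        # weight 2^(p-1) for p = 1..8
    return code

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
code = lbp_3x3(patch)
```

Running the pattern over every pixel and binning the resulting codes into 2^P bins gives the histogram of Eq. (3).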

Fig. 1
figure 1

Calculation of LBP

Block based local binary patterns (BLK_LBP)

Takala et al. [29] have proposed the block-based LBP for CBIR. The block division method is a simple approach that relies on subimages to address the spatial properties of images. It can be used together with any histogram descriptor similar to LBP. The method works in the following way: first it divides the model images into square blocks that are arbitrary in size and overlap. Then it calculates the LBP distributions for each of the blocks and combines the histograms into a single vector of sub-histograms representing the image.
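The block division step can be sketched as follows; this is a simplified illustration that assumes non-overlapping blocks and an LBP map computed beforehand (the method of [29] also allows overlapping blocks), and the block size and bin count are illustrative:

```python
import numpy as np

def block_histograms(lbp_map, block=16, bins=256):
    """Concatenate per-block LBP histograms into one feature vector.
    `lbp_map` is an image of already-computed LBP codes in [0, bins)."""
    h, w = lbp_map.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            blk = lbp_map[r:r + block, c:c + block]
            # histogram of the codes inside this block
            feats.append(np.bincount(blk.ravel(), minlength=bins))
    return np.concatenate(feats)

vec = block_histograms(np.zeros((32, 32), dtype=int), block=16)
```

Each block contributes `bins` entries, so a 32 × 32 map with 16 × 16 blocks yields a vector of 4 × 256 values that preserves coarse spatial layout.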

Center-symmetric local binary patterns (CS_LBP)

Instead of comparing each pixel with the center pixel, Heikkila et al. [30] have compared center-symmetric pairs of pixels for CS_LBP as shown in Eq. (5):

$$ CS\_LBP_{P,R} = \sum\limits_{p = 1}^{P/2} {2^{(p - 1)} \times f_{1} (I(g_{p} ) - I(g_{p + (P/2)} ))} $$
(5)

After computing the CS_LBP pattern for each pixel (j, k), the whole image is represented by building a histogram, similar to the LBP.
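The center-symmetric comparison of Eq. (5) can be sketched for a 3 × 3 neighborhood (P = 8, hence four symmetric pairs). The pair ordering is an assumed convention, and the strict threshold comparison follows the original CS-LBP formulation of Heikkila et al.:

```python
import numpy as np

def cs_lbp_3x3(patch, threshold=0):
    """CS-LBP code of the center of a 3x3 patch: compare the four
    center-symmetric neighbor pairs (g_p, g_{p+P/2}) instead of
    comparing each neighbor with the center."""
    pairs = [((1, 2), (1, 0)),   # right vs. left
             ((0, 2), (2, 0)),   # upper-right vs. lower-left
             ((0, 1), (2, 1)),   # top vs. bottom
             ((0, 0), (2, 2))]   # upper-left vs. lower-right
    code = 0
    for p, (a, b) in enumerate(pairs):
        if patch[a] - patch[b] > threshold:
            code += 2 ** p
    return code

ramp = np.array([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]])
```

With only four comparisons the code range shrinks from 256 to 16 bins, which is why CS-LBP pairs well with dense region descriptors such as SIFT.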

Directional local extrema patterns (DLEP)

Subrahmanyam et al. [35] proposed directional local extrema patterns (DLEP) for CBIR. DLEP describes the spatial structure of the local texture based on the local extrema relative to the center gray pixel \( g_{c} \).

In DLEP, for a given image the local extrema in 0°, 45°, 90°, and 135° directions are obtained by computing the local differences between the center pixel and its neighbors as shown below:

$$ I^{\prime} (g_{i} ) = I(g_{c} ) - I(g_{i} );\quad i = 1,2, \ldots ,8 $$
(6)

The local extrema are obtained by Eq. (7).

$$ \hat{I}_{\alpha } (g_{c} ) = f_{3} (I^{{\prime }} (g_{j} ),I^{{\prime }} (g_{j + 4} ));\quad j = (1 + \alpha /45)\;\;\forall \,\alpha = 0{^\circ },\;45{^\circ },\,90{^\circ },\,135{^\circ } $$
(7)
$$ f_{3} (I^{{\prime }} (g_{j} ),I^{{\prime }} (g_{j + 4} )) = \left\{ {\begin{array}{*{20}l} 1 \quad & {I^{{\prime }} (g_{j} ) \times I^{{\prime }} (g_{j + 4} ) \ge 0} \\ 0 \quad & {else} \\ \end{array} } \right. $$
(8)

The DLEP is defined (α = 0°, 45°, 90°, and 135°) as follows:

$$ \left. {DLEP(I(g_{c} ))} \right|_{\alpha } = \left\{ {\hat{I}_{\alpha } (g_{c} );\,\hat{I}_{\alpha } (g_{1} );\,\hat{I}_{\alpha } (g_{2} ); \ldots \hat{I}_{\alpha } (g_{8} )} \right\} $$
(9)
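The directional extremum test of Eqs. (6)–(8) can be traced for a single pixel as follows. The mapping from α to neighbor positions on the 3 × 3 grid is an assumed indexing (the paper fixes g_1…g_8 via its figures); the logic of f_3 is as in Eq. (8):

```python
import numpy as np

def dlep_bit(patch, alpha):
    """Extremum indicator I^_alpha(g_c) for the center of a 3x3 patch.
    alpha is one of 0, 45, 90, 135 (degrees)."""
    c = patch[1, 1]
    # opposite neighbor pair (g_j, g_{j+4}) along each direction
    opposite = {0:   ((1, 2), (1, 0)),
                45:  ((0, 2), (2, 0)),
                90:  ((0, 1), (2, 1)),
                135: ((0, 0), (2, 2))}
    a, b = opposite[alpha]
    d1, d2 = c - patch[a], c - patch[b]   # I'(g_j), I'(g_{j+4})
    # Eq. (8): 1 when both differences share a sign (center is an extremum)
    return 1 if d1 * d2 >= 0 else 0

peak = np.array([[1, 1, 1],
                 [1, 9, 1],
                 [1, 1, 1]])
```

A center that is a peak (or valley) along a direction yields 1; a center lying on a monotone ramp yields 0.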

Local quantized patterns (LQP)

Hussain and Triggs [40] have proposed the LQP operator for visual recognition. The LQP collects directional geometric features in horizontal (H), vertical (V), diagonal (D) and antidiagonal (A) strips of pixels; in combinations of these such as horizontal-vertical (HV), diagonal-antidiagonal (DA) and horizontal-vertical-diagonal-antidiagonal (HVDA); and in traditional circular and disk-shaped regions. Figure 2 illustrates the possible directional quantized geometric structures for the LQP operator. More details about LQP are available in [40].

Fig. 2
figure 2

The possible directional LQP structures

Local quantized extrema patterns (LQEP)

The operators DLEP [35] and LQP [40] have motivated us to propose the LQEP for image retrieval. The LQEP integrates the concepts of LQP and DLEP. First, the possible structures are extracted from the given pattern. Then, the extrema operation is performed on the directional geometric structures. Figure 3 illustrates the calculation of LQEP for a given 7 × 7 pattern. For easy understanding, in Fig. 3 the 7 × 7 pattern is indexed with pixel positions; the positions are indexed so that the four directional extrema operator calculations can be written directly. In this paper, the HVDA7 geometric structure is used for feature extraction. A brief description of the LQEP feature extraction is given as follows.

Fig. 3
figure 3

The LQEP calculation for a given 7 × 7 pattern using HVDA7 geometric structure

For a given center pixel (C) in an image I, the HVDA7 geometric structure is collected using Fig. 3. Then the four directional extrema (DE) in 0°, 45°, 90°, and 135° directions are obtained as follows.

$$ \left. {DE(I(g_{c} ))} \right|_{0^\circ } = \left\{ {f_{4} (I(g_{1} ),I(g_{4} ),I(g_{C} ));f_{4} (I(g_{2} ),I(g_{5} ),I(g_{C} ));f_{4} (I(g_{3} ),I(g_{6} ),I(g_{C} ))} \right\} $$
(10)
$$ \left. {DE(I(g_{c} ))} \right|_{45^\circ } = \left\{ {f_{4} (I(g_{13} ),I(g_{16} ),I(g_{c} ));f_{4} (I(g_{14} ),I(g_{17} ),I(g_{c} ));f_{4} (I(g_{15} ),I(g_{18} ),I(g_{c} ))} \right\} $$
(11)
$$ \left. {DE(I(g_{c} ))} \right|_{90^\circ } = \left\{ {f_{4} (I(g_{7} ),I(g_{10} ),I(g_{C} ));f_{4} (I(g_{8} ),I(g_{11} ),I(g_{C} ));f_{4} (I(g_{9} ),I(g_{12} ),I(g_{C} ))} \right\} $$
(12)
$$ \left. {DE(I(g_{c} ))} \right|_{135^\circ } = \left\{ {f_{4} (I(g_{19} ),I(g_{22} ),I(g_{C} ));f_{4} (I(g_{20} ),I(g_{23} ),I(g_{C} ));f_{4} (I(g_{21} ),I(g_{24} ),I(g_{C} ))} \right\} $$
(13)

where,

$$ f_{4} (x,y,c) = \left\{ {\begin{array}{*{20}l} 1 & {if\,(x > c)\,and\,(y > c)} \\ 1 & {if\,(x < c)\,and\,(y < c)} \\ 0 & {else} \\ \end{array} } \right. $$
(14)

The LQEP is defined by Eqs. (10)–(13) as follows.

$$ LQEP = \left[ {\left. {DE(I(g_{c} ))} \right|_{0^\circ } ,\left. {DE(I(g_{c} ))} \right|_{45^\circ } ,\left. {DE(I(g_{c} ))} \right|_{90^\circ } ,\left. {DE(I(g_{c} ))} \right|_{135^\circ } } \right] $$
(15)

Eventually, the given image is converted to an LQEP map with values ranging from 0 to 4095. After calculation of the LQEP, the whole image is represented by building a histogram as shown in Eq. (16).

$$ H_{LQEP} (l) = \sum\limits_{j = 1}^{{N_{1} }} {\sum\limits_{k = 1}^{{N_{2} }} {f_{2} (} } LQEP(j,k),l);\quad l \in [0,\,4095] $$
(16)

where LQEP(j, k) represents the LQEP map value, ranging from 0 to 4095.
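The 12-bit code of Eqs. (10)–(15) can be sketched for one 7 × 7 window as follows. This assumes the extremum reading of f_4 (both neighbors of a symmetric pair lying on the same side of the center, consistent with the DLEP test of Eq. (8)); the direction-to-offset mapping and bit ordering are assumed conventions:

```python
import numpy as np

def f4(x, y, c):
    """Eq. (14): 1 when the center is a directional extremum, i.e.
    both pair members lie on the same side of the center value."""
    return 1 if (x > c and y > c) or (x < c and y < c) else 0

def lqep_code(win):
    """12-bit LQEP code for the center of a 7x7 window (HVDA7):
    three symmetric pairs at radii 1, 2, 3 along each of the
    0, 45, 90 and 135 degree directions."""
    c = win[3, 3]
    bits = []
    for dr, dc in [(0, 1), (-1, 1), (-1, 0), (-1, -1)]:  # 0,45,90,135 deg
        for r in (1, 2, 3):                              # pair radii
            x = win[3 + r * dr, 3 + r * dc]
            y = win[3 - r * dr, 3 - r * dc]
            bits.append(f4(x, y, c))
    # pack the 12 bits into a code in [0, 4095]
    return sum(b << i for i, b in enumerate(bits))
```

A flat window produces code 0, while a center that is a strict minimum (or maximum) of every pair produces 4095, matching the stated map range.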

Proposed image retrieval system

In this paper, we integrate the concepts of DLEP and LQP for image retrieval. First, the image is loaded and converted into gray scale if it is RGB. Second, the HVDA7 structure is collected using the LQP geometric structures. Then, the four directional extrema in 0°, 45°, 90°, and 135° directions are collected. Finally, the LQEP feature is generated by constructing the histograms. Further, to improve the performance of the proposed method, we integrate the LQEP with the RGB color histogram for image retrieval.

Figure 4 depicts the flowchart of the proposed technique and algorithm for the same is given as follows.

Fig. 4
figure 4

Proposed image retrieval system framework

Algorithm:

Input: Image; Output: Retrieval result

  1. Load the image and convert it into gray scale (if it is RGB).
  2. Collect the HVDA7 structure for a given center pixel.
  3. Calculate the local extrema in 0°, 45°, 90°, and 135° directions.
  4. Compute the 12-bit LQEP with the four directional extrema.
  5. Construct the histogram for the 12-bit LQEP.
  6. Construct the RGB histogram from the RGB image.
  7. Construct the feature vector by concatenating the RGB and LQEP histograms.
  8. Compare the query image with the images in the database using Eq. (20).
  9. Retrieve the images based on the best matches.
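The feature-assembly steps of the algorithm (RGB histogram construction and concatenation with the LQEP histogram) can be sketched as follows; the bin count, the L1 normalization and the helper names are illustrative choices, not the authors' implementation:

```python
import numpy as np

def rgb_histogram(img, bins=8):
    """Per-channel RGB histogram, concatenated (algorithm step 6)."""
    return np.concatenate([np.histogram(img[..., c], bins=bins,
                                        range=(0, 256))[0]
                           for c in range(3)]).astype(float)

def feature_vector(lqep_hist, img):
    """Concatenate the (precomputed) LQEP histogram with the RGB
    histogram (step 7); both are L1-normalized so that neither
    component dominates the distance computation."""
    h1 = lqep_hist / max(lqep_hist.sum(), 1)
    h2 = rgb_histogram(img)
    h2 = h2 / max(h2.sum(), 1)
    return np.concatenate([h1, h2])

img = np.zeros((16, 16, 3), dtype=int)
fv = feature_vector(np.zeros(4096), img)
```

With a 4096-bin LQEP histogram and 3 × 8 RGB bins, the combined vector has 4120 entries.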

Query matching

The feature vector for a query image Q, \( f_{Q} = (f_{{Q_{1} }} ,f_{{Q_{2} }} , \ldots ,f_{{Q_{Lg} }} ) \), is obtained after feature extraction. Similarly, each image in the database is represented by a feature vector \( f_{{DB_{j} }} = (f_{{DB_{j1} }} ,f_{{DB_{j2} }} , \ldots ,f_{{DB_{jLg} }} );\,j = 1,2, \ldots ,\left| {DB} \right| \). The goal is to select the n images that best resemble the query image. This involves selecting the n top-matched images by measuring the distance between the query image and each image in the database \( DB \). To match the images, we use four different similarity distance measures:

$$ Manhattan \, distance \, measure{:}\,D(Q,I_{1} ) = \sum\limits_{i = 1}^{Lg} {\left| {f_{{DB_{ji} }} - f_{Q,i} } \right|} $$
(17)
$$ Euclidean \, distance \, measure{:}\;D(Q,I_{1} ) = \left( {\sum\limits_{i = 1}^{Lg} {(f_{{DB_{ji} }} - f_{Q,i} )^{2} } } \right)^{1/2} $$
(18)
$$ Canberra \, distance \, measure{:} \,D(Q,I_{1} ) = \sum\limits_{i = 1}^{Lg} {\frac{{\left| {f_{{DB_{ji} }} - f_{Q,i} } \right|}}{{\left| {f_{{DB_{ji} }} } \right| + \left| {f_{Q,i} } \right|}}} $$
(19)
$$ d_{1}\; distance \, measure{:}\,D(Q,I_{1} ) = \sum\limits_{i = 1}^{Lg} {\left| {\frac{{f_{{DB_{ji} }} - f_{Q,i} }}{{1 + f_{{DB_{ji} }} + f_{Q,i} }}} \right|} $$
(20)

where \( L_{g} \) represents the feature vector length, \( I_{1} \) is the database image and \( f_{{DB_{ji} }} \) is the \( i \)th feature of the \( j \)th image in the database \( DB \).
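The d1 measure of Eq. (20), which the experiments below use for ranking, can be sketched together with a top-n selection step; the function names are illustrative:

```python
import numpy as np

def d1_distance(fq, fdb):
    """d1 measure of Eq. (20) between query and database features."""
    return float(np.sum(np.abs((fdb - fq) / (1.0 + fdb + fq))))

def rank_database(fq, db_feats, n=10):
    """Indices of the n database images closest to the query."""
    d = np.array([d1_distance(fq, f) for f in db_feats])
    return np.argsort(d)[:n]

fq = np.array([1.0, 0.0])
db = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
best = rank_database(fq, db, n=1)
```

Unlike the Euclidean distance, each term of d1 is normalized by the bin magnitudes, which dampens the influence of heavily populated histogram bins.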

Experimental results and discussion

The performance of the proposed method is analyzed by conducting three experiments on benchmark databases: the Corel-1K [45], Corel-5K [45] and MIT VisTex [46] databases.

In all experiments, each image in the database is used as the query image. For each query, the system collects \( n \) database images \( X = (x_{1} ,x_{2} , \ldots ,x_{n} ) \) with the shortest image matching distance computed using Eq. (20). If a retrieved image \( x_{i} ,\;i = 1,2, \ldots ,n \) belongs to the same category as the query image, then we say the system has appropriately identified the expected image; otherwise, the system fails to find the expected image.

The performance of the proposed method is measured in terms of average precision/average retrieval precision (ARP) and average recall/average retrieval rate (ARR), as shown below.

For the query image \( I_{q} \), the precision is defined as follows:

$$ Precision{:}\,P(I_{q} ) = \frac{Number\,of \, relevant \, images \, retrieved}{Total \, Number\,of \, images \, retrieved} $$
(21)
$$ Average\;Retrieval\,Precision{:}\, ARP = \frac{1}{{\left| {DB} \right|}}\sum\limits_{i = 1}^{{\left| {DB} \right|}} {P(I_{i} )} $$
(22)
$$ Recall{:}\,R(I_{q} ) = \frac{Number\,of \, relevant \, images \, retrieved}{Total \, Number\,of\,relevant \, images \, in\,the\,database} $$
(23)
$$ Average\,Retrieval\,Rate{:}\,ARR = \frac{1}{{\left| {DB} \right|}}\sum\limits_{i = 1}^{{\left| {DB} \right|}} {R(I_{i} )} $$
(24)
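Eqs. (21) and (23) for a single query can be sketched as follows; the function name is illustrative, and averaging over all queries then gives the ARP and ARR of Eqs. (22) and (24):

```python
def precision_recall(retrieved_labels, query_label, n_relevant):
    """Precision (Eq. 21) and recall (Eq. 23) for one query:
    retrieved_labels are the category labels of the n retrieved
    images; n_relevant is the number of relevant images in the
    database (100 per category for the Corel databases)."""
    hits = sum(1 for lbl in retrieved_labels if lbl == query_label)
    precision = hits / len(retrieved_labels)
    recall = hits / n_relevant
    return precision, recall

p, r = precision_recall(['a', 'a', 'b', 'a'], 'a', n_relevant=100)
```

For example, 3 relevant images among 4 retrieved gives precision 0.75; with 100 relevant images in the database, the recall of that query is 0.03.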

Experiment #1

In this experiment, the Corel-1K database [45] is used. This database consists of a large number of images of various contents ranging from animals and outdoor sports to natural scenes. These images have been pre-classified by domain professionals into different categories of 100 images each. Some researchers consider that the Corel database meets all the requirements to evaluate an image retrieval system, due to its large size and heterogeneous content. For experimentation we selected 1000 images collected from 10 different domains with 100 images per domain. The performance of the proposed method is measured in terms of ARP and ARR as shown in Eqs. (21)–(24). Figure 5 illustrates sample images of the Corel-1K database.

Fig. 5
figure 5

Sample images of Corel-1K database

Table 1 and Fig. 6 illustrate the retrieval results of the proposed method and other existing methods in terms of ARP on the Corel-1K database. Table 2 and Fig. 7 illustrate the retrieval results in terms of ARR on the Corel-1K database. From Tables 1 and 2 and Figs. 6 and 7, it is clear that the proposed method shows a significant improvement as compared to other existing methods in terms of precision, ARP, recall and ARR on the Corel-1K database. Figure 8a, b illustrates the analysis of the proposed method (LQEP) with various similarity distance measures on the Corel-1K database in terms of ARP and ARR respectively. From Fig. 8, it is observed that the d1 distance measure outperforms the other distance measures in terms of ARP and ARR on the Corel-1K database. Figure 9 illustrates the query results of the proposed method on the Corel-1K database.

Table 1 The retrieval results of proposed method and various other existing method in terms of ARP at n = 20 on Corel-1 K database
Fig. 6
figure 6

Comparison of proposed method with other existing methods in terms of ARP on Corel-1K database

Table 2 The retrieval results of proposed method and various other existing method in terms of ARR at n = 20 on Corel-1K database
Fig. 7
figure 7

Comparison of proposed method with other existing methods in terms of ARR on Corel-1K database

Fig. 8
figure 8

Comparison of proposed method with various distance measures in terms of ARP and ARR on Corel-1K database

Fig. 9
figure 9

Query results of proposed method on Corel-1K database

Experiment #2

In this experiment the Corel-5K database [45] is used for image retrieval. The Corel-5K database consists of 5000 images collected from 50 different domains with 100 images per domain. The performance of the proposed method is measured in terms of ARP and ARR as shown in Eqs. (21)–(24).

Table 3 illustrates the retrieval results of the proposed method and other existing methods on the Corel-5K database in terms of precision and recall. Figure 10a, b shows the category-wise performance of the methods in terms of precision and recall on the Corel-5K database. The performance of all techniques in terms of ARP and ARR on the Corel-5K database can be seen in Fig. 10c, d respectively. From Table 3 and Fig. 10, it is clear that the proposed method shows a significant improvement as compared to other existing methods in terms of their evaluation measures on the Corel-5K database. The performance of the proposed method is also analyzed with various distance measures on the Corel-5K database as shown in Fig. 11. From Fig. 11, it is observed that the d1 distance measure outperforms the other distance measures in terms of ARP and ARR on the Corel-5K database. Figure 12 illustrates the query results of the proposed method on the Corel-5K database.

Table 3 Results of various methods in terms of precision and recall on Corel-5K database
Fig. 10
figure 10

Comparison of proposed method with other existing methods on Corel–5K. a Category wise performance in terms of precision, b category wise performance in terms of recall, c total database performance in terms of ARP and d total database performance in terms of ARR

Fig. 11
figure 11

Comparison of proposed method with various distance measures in terms of ARP on Corel-5K database

Fig. 12
figure 12

Query results of proposed method on Corel-5K database (top left image is the query image)

Experiment #3

In this experiment, the MIT VisTex database is considered, which consists of 40 different textures [46]. The size of each texture is 512 × 512, and each is divided into sixteen 128 × 128 non-overlapping sub-images, thus creating a database of 640 (40 × 16) images. In this experiment, each image in the database is used as the query image. The average retrieval rate (ARR), given in Eq. (24), is the benchmark for comparing the results of this experiment.
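The tiling step that builds the 640-image database can be sketched as follows; the function name is illustrative:

```python
import numpy as np

def split_tiles(texture, tile=128):
    """Split a 512x512 texture into sixteen non-overlapping
    128x128 sub-images, as done for the MIT VisTex experiment."""
    h, w = texture.shape[:2]
    return [texture[r:r + tile, c:c + tile]
            for r in range(0, h, tile)
            for c in range(0, w, tile)]

tiles = split_tiles(np.zeros((512, 512)))
```

Applying this to each of the 40 textures yields the 40 × 16 = 640 sub-images used as both queries and database entries.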

Figures 13 and 14 illustrate the performance of various methods in terms of ARR and ARP on the MIT VisTex database. From Figs. 13 and 14, it is clear that the proposed method shows a significant improvement as compared to other existing methods in terms of ARR and ARP on the MIT VisTex database. Figure 15 illustrates the performance of the proposed method with different similarity distance measures in terms of ARR on the MIT VisTex database. From Fig. 15, it is observed that the d1 distance measure outperforms the other distance measures in terms of ARR on the MIT VisTex database. Figure 16 illustrates the query results of the proposed method on the MIT VisTex database.

Fig. 13
figure 13

Comparison of proposed method with other existing methods in terms of ARR on MIT VisTex database

Fig. 14
figure 14

Comparison of proposed method with other existing methods in terms of ARP on MIT VisTex database

Fig. 15
figure 15

Comparison of proposed method with various distance measures in terms of ARR on MIT VisTex database

Fig. 16
figure 16

Query results of the proposed method on MIT VisTex database (top left image is the query image)

Conclusions

A new feature descriptor, named local quantized extrema patterns (LQEP), is proposed in this paper for natural and texture image retrieval. The proposed method integrates the concepts of local quantized geometric structures and local extrema for extracting features from a query/database image for retrieval. Further, the performance of the proposed method is improved by combining it with the standard RGB histogram. The performance of the proposed method is tested by conducting three experiments on benchmark image databases, and the retrieval results show a significant improvement in terms of the evaluation measures as compared to other existing methods on the respective databases.