1 Introduction

Texture analysis and classification are topics of interest due to their numerous possible applications, such as medical imaging, industrial quality control and remote sensing. A wide variety of methods for texture analysis has been developed, such as co-occurrence matrices [11], Markov random fields [34], autocorrelation methods [24, 29], Gabor filtering [6, 9, 16, 18, 21, 32] and wavelet decomposition [33]. These methods mostly concern intensity images and, since color information is a vector quantity, an adaptation to the color domain is not always straightforward. Regarding color texture, the possible approaches can be divided into three categories [25], called parallel, sequential and integrative. In the parallel approach [22, 27] textural features are extracted solely from the luminance plane of an image and are used together with color features. The sequential approach [12] involves a quantization of the color space and subsequently the extraction of statistical features from the indexed images.

The integrative approach [5, 14, 15, 20, 25] is the most popular one; it describes color texture by combining color information with the spatial relationships of image regions within each color channel and between different color channels. The simplest integrative approach would consist only of a gray scale transformation of the input image, but in many cases this has proven insufficient. A very common refinement of the integrative approach is based on the opponent-process theory of human color vision, which has its roots in neuroscience. Ewald Hering [13] first noted that there are some color combinations that humans are not able to see, such as reddish-green or yellowish-blue, since these colors strongly oppose each other. Hence, he proposed that such color pairs are the components of a single vision mechanism whose constituents oppose each other through a process of excitatory and inhibitory responses. A popular application of this theory in computer vision is the Gaussian color model [8].

In this contribution we introduce a novel integrative approach towards color texture classification and recognition based on adaptive filters obtained through supervised learning. The kernels we use are initialized as two-dimensional Gabor filters. A 2D Gabor filter acts as a local band-pass filter and can achieve optimal joint localization in both the spatial and the frequency domain [4]. Given a set of labeled color images (RGB) for training and a bank of 2D Gabor filter kernels, the goal here is to learn a transformation of a color image to a single-channel (intensity) image and an optimal adaptation of the kernels, such that the responses of the transformed images, when filtered with the optimized kernels, yield the best possible classification.

Many signal processing techniques are based on insights or empirical observations from neurophysiology or optical physics. The proposed approach instead incorporates data-driven adaptation of the system, i.e. example-based learning. Furthermore, the “family” of filters used in our approach can be substituted, depending on the data domain and the task at hand. As an example, we explore in this paper the use of rotation and scale invariant descriptors based on Gabor filter responses [10]. We demonstrate that our approach yields very good generalization ability.

The paper is structured as follows: In Sects. 2 and 3 we present overviews of the existing approaches for color texture analysis and the Learning Vector Quantization algorithm respectively. In Sect. 4 the Color Image Analysis LVQ is explained in detail and Sect. 5 presents experimental results. Finally, in Sect. 6 we draw conclusions.

2 Overview of Existing Approaches

In texture analysis, Gabor filter responses and Local Binary Patterns are two very popular types of descriptors that have been extended to color texture via integrative approaches using the opponent color model.

Jain et al. [15] proposed an approach that extends the use of features extracted from Gabor filter responses to color texture classification, motivated by mechanisms of human vision. For this purpose they compute features from each color channel independently (unichrome features), as well as features that capture the spatial correlation between spectral bands (opponent features). Let $h_{imn}$ be the response of the i-th color channel of a given image when filtered with a Gabor kernel with scale m and orientation n. The unichrome features are defined as the square root of the energy of the Gabor responses:

$$ \mu_{imn} = \sqrt{ \biggl( \sum_{x,y} h^2_{imn}(x,y) \biggr)}. $$

The opponent features are based on the difference of the normalized Gabor responses between different color channels and scales at the same orientation. This difference is:

$$ d_{ijm'mn} = \biggl( \frac{h_{imn}}{\mu_{imn}} - \frac {h_{jm'n}}{\mu_{jm'n}} \biggr) $$

thus defining the opponent feature:

$$ \psi_{ijm'mn} = \sqrt{ \biggl( \sum_{x,y} d^2_{ijm'mn}(x,y) \biggr)} . $$

All unichrome and opponent features are concatenated into a single feature vector that is used as a descriptor for the given image. In the following we refer to this technique as Opponent Color Features (OCF).
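To make the construction of the OCF descriptor concrete, the following sketch computes unichrome and opponent features from precomputed Gabor responses. The data layout (`responses[i][m][n]` holding the 2D response of channel i at scale m and orientation n) and the exact set of channel/scale combinations kept for the opponent part are assumptions for illustration, not taken from [15].

```python
import numpy as np

def ocf_features(responses):
    """Sketch of Opponent Color Features: unichrome + opponent energies.

    responses[i][m][n] is assumed to be the 2D Gabor response of color
    channel i at scale m and orientation n (hypothetical layout).
    """
    n_ch = len(responses)
    n_scales = len(responses[0])
    n_orient = len(responses[0][0])

    # Unichrome features: square root of the response energy per (i, m, n).
    mu = np.zeros((n_ch, n_scales, n_orient))
    for i in range(n_ch):
        for m in range(n_scales):
            for n in range(n_orient):
                mu[i, m, n] = np.sqrt(np.sum(responses[i][m][n] ** 2))

    # Opponent features: energy of the difference of normalized responses
    # between channels i != j and scale pairs (m, m') at the same orientation.
    opp = []
    for n in range(n_orient):
        for i in range(n_ch):
            for j in range(n_ch):
                if i == j:
                    continue
                for m in range(n_scales):
                    for mp in range(n_scales):
                        d = (responses[i][m][n] / mu[i, m, n]
                             - responses[j][mp][n] / mu[j, mp, n])
                        opp.append(np.sqrt(np.sum(d ** 2)))

    # Concatenation of unichrome and opponent features as the descriptor.
    return np.concatenate([mu.ravel(), np.array(opp)])
```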

Local Binary Patterns (LBP) are based on the idea that texture can be described by local spatial patterns and gray scale contrast. The original LBP operator [23] creates labels for the image pixels by thresholding their 3×3 neighborhood with the center value. The pixels with lower intensities than the center pixel are labeled with 0, whereas those with equal or higher intensity values are labeled with 1. The labels are read clockwise as a binary number. This process is repeated for every pixel and the histogram of the 256 possible binary numbers is then used as a texture descriptor. The LBP operator was later extended to neighborhoods of different sizes [24], using circular neighborhoods and bi-linearly interpolated values at non-integer pixel coordinates. In the following, the notation (P,R) denotes a pixel neighborhood formed by P sampling points on a circle of radius R. Another extension of the original operator is the definition of the so-called uniform patterns, which can be used to reduce the length of the feature vector and to implement a simple rotation-invariant descriptor. This extension was inspired by the observation that some binary patterns occur more frequently in texture images than others. A local binary pattern is called uniform if it contains at most two transitions from 0 to 1 or vice versa when it is traversed circularly. Ojala et al. [24] noticed in their experiments that uniform patterns account for a little less than 90 % of all patterns when using a (8,1) neighborhood and for around 70 % with a (16,2) neighborhood. After the LBP labeled image $f_l(x,y)$ has been obtained, the descriptor is defined as:

$$ H_i=\sum_{x,y} I \bigl \{f_l(x,y)=i \bigr\}, \quad i=0,\ldots,n-1 . $$

The Color LBP extension [28] is based on the ability to take the local threshold (neighborhood center) from n different color channels. The neighborhood to be thresholded can also be taken from these channels, which makes up a total of $n^2$ different combinations. The $n^2$ histogram descriptors are then concatenated into a single feature vector.
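As an illustration of this channel-pairing idea, the sketch below implements the basic 3×3 LBP operator with the center taken from one channel and the neighborhood from another, and concatenates all $n^2 = 9$ histograms for an RGB image. This is a simplified stand-in for the rotation-invariant uniform (8,1) variant actually used in the experiments; the clockwise bit ordering is likewise an assumption.

```python
import numpy as np

def color_lbp_histogram(center_ch, neigh_ch):
    """Basic 3x3 LBP: thresholds from center_ch, samples from neigh_ch.

    Both arguments are 2D arrays of equal size; returns the 256-bin
    histogram of binary codes used as a texture descriptor.
    """
    h, w = center_ch.shape
    # neighbor offsets, read clockwise starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = center_ch[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = neigh_ch[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(int) << bit   # 1 if >= center value
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist

def color_lbp_descriptor(image):
    """Concatenate the n^2 = 9 histograms of all RGB channel pairs."""
    channels = [image[..., c].astype(float) for c in range(3)]
    return np.concatenate([color_lbp_histogram(c, n)
                           for c in channels for n in channels])
```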

3 Review of the (Generalized Matrix) Learning Vector Quantization

Learning Vector Quantization (LVQ) is a supervised prototype-based classification method [17]. The training is based on data points $\mathbf{x}^i \in \mathbb{R}^D$ and their corresponding label information $y^i \in \{1,\ldots,C\}$, where D denotes the dimension of the feature vectors and C the number of classes. A set of prototypes is characterized by their locations in the feature space, $\mathbf{w}^i \in \mathbb{R}^D$, and the respective class labels $c(\mathbf{w}^i) \in \{1,\ldots,C\}$. Classification is implemented as a winner-takes-all scheme. For this purpose, a possibly parameterized dissimilarity measure $d^\varOmega$ is defined, where $\varOmega$ specifies the metric parameters which can be adapted during training. Given $d^\varOmega(\mathbf{x},\mathbf{w})$, any data point $\mathbf{x}$ is assigned the class label $c(\mathbf{w}^i)$ of the closest prototype $\mathbf{w}^i$, i.e. the prototype with $d^\varOmega(\mathbf{x},\mathbf{w}^i) \leq d^\varOmega(\mathbf{x},\mathbf{w}^j)$ for all $j \neq i$. The position of the closest (“winner”) prototype in the feature space is then adapted according to a learning rule: $\mathbf{w}^i$ is moved closer to $\mathbf{x}$ if the data point is classified correctly and moved away from $\mathbf{x}$ otherwise. The number of prototypes used to represent a class can be chosen by the user according to the nature of the data and the task at hand; typically between 1 and 5 prototypes are assigned to each class.

A training scheme called Generalized LVQ (GLVQ) [30] is derived as a minimization of the cost function:

$$ f_c\bigl(d^\varOmega,J,K\bigr) = \sum _i \varPhi \biggl(\frac{d^\varOmega(\mathbf {x}^i,\mathbf {w}^J)-d^\varOmega(\mathbf {x}^i,\mathbf {w}^K)}{d^\varOmega(\mathbf {x}^i,\mathbf {w}^J) + d^\varOmega (\mathbf {x}^i,\mathbf {w}^K)} \biggr) $$
(1)

where the quantities

$$ d^\varOmega\bigl(\mathbf {x}^i,\mathbf {w}^J\bigr) = \min_{j:\, c(\mathbf {w}^j) = y^i} d^\varOmega\bigl(\mathbf {x}^i,\mathbf {w}^j\bigr) $$
(2)

$$ d^\varOmega\bigl(\mathbf {x}^i,\mathbf {w}^K\bigr) = \min_{j:\, c(\mathbf {w}^j) \neq y^i} d^\varOmega\bigl(\mathbf {x}^i,\mathbf {w}^j\bigr) $$
(3)

correspond to the distances of the feature vector $\mathbf{x}^i$ from the respective closest correct prototype $\mathbf{w}^J$ and the closest wrong prototype $\mathbf{w}^K$. $\varPhi$ must be a monotonic function; throughout the following the identity $\varPhi(x)=x$ is used.

Generalized Matrix Learning Vector Quantization (GMLVQ) is an extension of the original algorithm with an adaptive dissimilarity measure based on the quadratic form:

$$ d^\varOmega(\mathbf {x},\mathbf {w}) = (\mathbf {x}-\mathbf {w})^\top \varOmega^\top\varOmega(\mathbf {x}-\mathbf {w}) $$
(4)

The matrix $\varLambda=\varOmega^\top\varOmega$ is assumed to be positive (semi-)definite. Hence the measure corresponds to a (squared) Euclidean distance in an appropriately transformed space:

$$ d^\varOmega(\mathbf {x},\mathbf {w}) = \bigl[\varOmega(\mathbf {x}-\mathbf {w}) \bigr]^2 $$
(5)

Specific restrictions may be imposed on the transformation $\varOmega\in\mathbb{R}^{M\times D}$ with $M\leq D$ without loss of generality. For $M<D$, $\varOmega$ transforms the D-dimensional data into a lower, M-dimensional space. This variant is referred to as Limited Rank Matrix LVQ (LiRaM LVQ) and is explained in [1, 2]. The original algorithm follows a stochastic gradient descent for the optimization of the cost function (Eq. (1)). The gradients are evaluated with respect to the contribution of single instances $\mathbf{x}^i$, which are presented sequentially and in random order during training. The algorithm has been introduced and discussed in [31] and will be modified in the subsequent sections.
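The following minimal sketch illustrates the GMLVQ ingredients defined above: the adaptive distance of Eqs. (4)/(5) and the GLVQ cost of Eq. (1) with the winner selection of Eqs. (2)/(3). The variable names and the identity choice for Φ are illustrative; prototype initialization and the gradient updates are omitted.

```python
import numpy as np

def gmlvq_distance(x, w, omega):
    """Adaptive dissimilarity of Eq. (4)/(5): squared Euclidean distance
    in the space obtained by applying Omega (shape M x D, M <= D)."""
    diff = omega @ (x - w)
    return float(np.dot(diff, diff))

def glvq_cost(X, y, W, c_w, omega, phi=lambda z: z):
    """GLVQ cost of Eq. (1) with Phi the identity by default.

    X: (n, D) data, y: length-n labels,
    W: (P, D) prototypes, c_w: length-P prototype labels.
    """
    c_w = np.asarray(c_w)
    cost = 0.0
    for xi, yi in zip(X, y):
        d = np.array([gmlvq_distance(xi, w, omega) for w in W])
        dJ = d[c_w == yi].min()   # closest correct prototype, Eq. (2)
        dK = d[c_w != yi].min()   # closest wrong prototype, Eq. (3)
        cost += phi((dJ - dK) / (dJ + dK))
    return cost
```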

4 Color Image Analysis Learning Vector Quantization

In this contribution we present an extension of the GMLVQ concept that is especially designed for color texture analysis. We use the same cost function, Eq. (1), as in the original GMLVQ algorithm and follow a stochastic gradient descent procedure, in which the samples $\mathbf{x}^i$ of the training set are presented sequentially and the parameters are updated accordingly. We will refer to this algorithm as Color Image Analysis LVQ (CIA-LVQ) and to one sweep through the training set as one epoch E.

Let D be a data set of color images of a priori known size (p×p) that belong to C different classes, and let G be a bank of filter kernels, initialized as a sum of Gabor filters with different scales and orientations. The goal is to learn one or more matrices $\varOmega_k$ that transform the color images into a single-channel, “intensity” image, a set of optimized kernels $\widehat{G}_{k}$ and a set of prototypes $\mathbf{w}^k$, such that the filter responses of the transformed images yield the best possible classification. In addition, we use an adaptation of the learning rates that makes the system less dependent on their initial values.

For both the filter kernels and the images we use their representation in the Fourier domain. The image data are vectorized, resulting in a data set of complex vectors $\mathbf{x}^i \in \mathbb{C}^N$, where $N = p\cdot p\cdot 3$, with p denoting the image patch size. These vectors are transformed by $\varOmega_k \in \mathbb{C}^{M\times N}$, where $M = p\cdot p$. The transformation $\varOmega_k$ can be considered the equivalent of a color to gray scale image transformation, with k referring to the index of a prototype $\mathbf{w}^k$, or to the index of its class label in the case of class-wise transformations. Subsequently, the transformed image data are filtered with every kernel $G_l \in G$ and the l responses are summed up. The filter kernels are also represented as complex vectors $G_l \in \mathbb{C}^M$. The general form of the descriptor of an individual image is:

$$ \mathbf {r}_k^i = \mathbf {x}^i\varOmega_k^\top *\sum _l G_l $$
(6)

where ∗ denotes the convolution. Each such descriptor is associated with a label $y^i \in \{1,\ldots,C\}$.

Note that Eq. (6) describes only one convolution with the summed kernel $\widehat{G} = \sum_{l} G_{l}$ rather than l separate convolutions; this is merely a matter of convenience and not of conceptual value. Since convolution distributes over addition, also for the representations in the Fourier domain, Eq. (6) yields a result identical to summing the individual filter responses and can be simplified to:

$$ \mathbf {r}_k^i = \mathbf {x}^i\varOmega_k^\top *\widehat{G} $$
(7)

This offers a clear gain in processing time, especially for larger filter banks, and it also simplifies the optimization process. In the following we therefore optimize the sum of kernels $\widehat{G}$.

We define the dissimilarity measure as:

$$ d^{\varOmega_k}_{\widehat{G}_k}\bigl(\mathbf {x}^i, \mathbf {w}^k\bigr) = \bigl\lVert\bigl|\mathbf {r}_k^i\bigr|^2 - \bigl|\mathbf {w}^k\bigr|^2\bigr\rVert^2 $$
(8)

which corresponds to the difference of magnitudes between a prototype and an image descriptor. In this fashion we ensure that two images containing the same texture pattern are considered similar, independent of the position within the image where this pattern occurs.
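A minimal sketch of the descriptor of Eq. (7) and the dissimilarity of Eq. (8) is given below. It assumes that the filtering with $\widehat{G}$ is realized as an element-wise product of the Fourier-domain representations (i.e. a convolution in the spatial domain); this reading of Eq. (7), as well as all variable names, is an assumption for illustration rather than the exact implementation.

```python
import numpy as np

def cia_descriptor(x_fourier, omega_k, g_hat):
    """Sketch of Eq. (7): transform the color image, then filter it.

    x_fourier : complex vector of length N (vectorized Fourier-domain RGB patch)
    omega_k   : complex matrix of shape (M, N), the color-to-intensity map
    g_hat     : complex vector of length M, the summed filter bank
    """
    transformed = omega_k @ x_fourier   # single-channel "intensity" image
    return transformed * g_hat          # filtering as a Fourier-domain product

def cia_dissimilarity(r, w):
    """Eq. (8): squared norm of the difference of squared magnitudes,
    which makes the measure insensitive to where the pattern occurs."""
    diff = np.abs(r) ** 2 - np.abs(w) ** 2
    return float(np.dot(diff, diff))
```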

4.1 Explicit Form of the Learning Rules

The learning rules of CIA-LVQ can be derived from the dissimilarity measure of Eq. (8) by taking the derivatives with respect to the parameters $\mathbf{w}^k$, $\varOmega_k$ and $\widehat{G}_{k}$. The parameter updates read as follows:

(9)
(10)
(11)

where

(12)
(13)
(14)

In Eqs. (9)–(14), $L\in\{J,K\}$, and $\alpha$, $\epsilon$ and $\eta$ are the learning rates for the prototypes, the transformation matrices and the kernels used for filtering, respectively.

The derivatives with respect to the closest correct prototype $\mathbf{w}^J$ and the closest wrong prototype $\mathbf{w}^K$, together with the corresponding matrices $\varOmega_J$, $\varOmega_K$ and the filter kernels $\widehat{G}_{J}$, $\widehat{G}_{K}$, for a given training data point $\mathbf{x}^i$ read:

(15)
(16)
(17)

with $\overline{(\,\cdot\,)}$ denoting the complex conjugate and

(18)
(19)

Note that, since we are working with complex values, all derivatives have to be taken with respect to the real and imaginary parts separately.

4.2 Adaptation of the Learning Rates

Steepest descent methods rely on the choice of a suitable magnitude for the update step (learning rate). Very small steps usually only slow down convergence, whereas very large steps might result in oscillatory or divergent behavior. In the case of CIA-LVQ the update steps are denoted by $\alpha$, $\epsilon$ and $\eta$, and the issue of choosing their values is addressed by considering way-point averages over a number of recent iteration steps together with an efficient step size adaptation. This technique is discussed in [26] for normalized gradients; in CIA-LVQ we use its basic principles without the normalization.

The general form of the update of a parameter $\mathbf{x}$ is an iterative process with an initial learning rate $\psi_0$ and an initial parameter value $\mathbf{x}_0$. At every iteration step the cost function $f_c(\mathbf{x}_j)$ is computed. At first we perform $k>1$ unaltered gradient steps as follows:

$$ \mathbf {x}_{j+1} = \mathbf {x}_j - \psi_j\Delta \mathbf {x}_j $$
(20)

for $j=0,1,\ldots,k-1$ with $\psi_j=\psi_0$. Subsequently, in addition to the current gradient step $\tilde{\mathbf{x}}_{t+1}$, we also compute the way-point average of the previous k steps:

$$ \hat{\mathbf {x}}_{t+1} = \frac{1}{k}\sum _{i=0}^{k-1} \mathbf {x}_{t-i} $$
(21)

We determine the new position of the parameter $\mathbf{x}_{t+1}$ and the new step size $\psi_{t+1}$ as:

$$ \mathbf {x}_{t+1} = \begin{cases} \tilde{\mathbf {x}}_{t+1} & \text{if } f_c(\tilde{\mathbf {x}}_{t+1}) \leq f_c(\hat{\mathbf {x}}_{t+1}), \\ \hat{\mathbf {x}}_{t+1} & \text{otherwise,} \end{cases} $$
(22)

$$ \psi_{t+1} = \begin{cases} \psi_t & \text{if } f_c(\tilde{\mathbf {x}}_{t+1}) \leq f_c(\hat{\mathbf {x}}_{t+1}), \\ \psi_t/\beta & \text{otherwise.} \end{cases} $$
(23)

As long as a simple gradient descent step yields a position for the parameter x that results in lower costs than the average of the k latest positions of x, the iterative process remains unaltered. On the other hand, \(f_{c}(\tilde{\mathbf {x}}_{t+1}) > f_{c}(\hat{\mathbf {x}}_{t+1})\) indicates that the step size is too large and should be reduced by a factor β.
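A compact sketch of this way-point scheme is shown below; the window size k, the reduction factor β and the choice to fall back to the way-point average when the cost increases are illustrative assumptions consistent with the description above.

```python
import numpy as np

def waypoint_step(history, x_tilde, psi, f_c, beta=2.0, k=5):
    """One step of the way-point learning rate adaptation.

    history : list of the most recent parameter positions x_{t-k+1}, ..., x_t
    x_tilde : candidate position produced by a plain gradient step, Eq. (20)
    psi     : current step size
    f_c     : callable evaluating the cost function for a given position
    """
    x_hat = np.mean(history[-k:], axis=0)      # way-point average, Eq. (21)
    if f_c(x_tilde) <= f_c(x_hat):
        return x_tilde, psi                    # keep the unaltered gradient step
    return x_hat, psi / beta                   # step too large: shrink the step size
```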

In the next section we experiment with the algorithm and show its use in practice.

5 Experiments

In order to evaluate the usefulness of the proposed algorithm, we perform classification on patches of pictures taken from the VisTex [3] and the KTH-TIPS [7] databases. From the VisTex database we use 29 color images of size 128×128 pixels from the groups Bark, Brick, Fabric and Food. The KTH-TIPS set is used in its original form and consists of 810 color images of size 200×200 pixels from 10 different classes: Sandpaper, Aluminium Foil, Sponge, Styrofoam, Corduroy, Linen, Brown Bread, Cotton, Orange Peel and Cracker. Although in the texture classification literature every image is often considered a different class, here we distinguish four and ten different classes respectively, corresponding to the conceptual groups to which the images belong. Despite its increased difficulty, this classification task allows us to better demonstrate the ability of CIA-LVQ to describe general characteristics of real-world texture patterns.

We split both data sets into two subsets. One subset is used for training, whereas the other is never seen during training and is used for evaluation. Figures 1 and 2 depict the training and evaluation images from the VisTex database. Figures 3 and 4 depict examples of training and evaluation images, respectively, from the KTH-TIPS database.

Fig. 1 Images used to provide patches for training and test (VisTex)

Fig. 2 Images used to provide patches for evaluation (VisTex)

Fig. 3 Images used to provide patches for training and test (KTH-TIPS)

Fig. 4 Images used to provide patches for evaluation (KTH-TIPS)

For our experiments we draw 15×15 patches randomly from each image. The training subsets of images are further divided into training and test sets of patches. The VisTex training subset consists of 200 patches per image. We use 150 patches per image (2400 data points) for training and test the performance of CIA-LVQ on the remaining 50 patches per image (800 data points). With respect to the KTH-TIPS training subset, we draw 9 patches per image and use 6 for training (3240 data points) and the remaining 3 (1640 data points) for testing. The test sets may contain patches which partially overlap with those used for training. Therefore, we use the images in Figs. 2 and 4 to create evaluation sets that have never been seen during the training process and thus better demonstrate the generalization ability of the proposed approach. The evaluation sets consist of 50 and 6 randomly drawn patches per image for VisTex and KTH-TIPS respectively.

A note is due here on the nature of the filters used for initialization. A 2D Gabor filter is defined as a Gaussian kernel function modulated by a sinusoidal plane wave. All filter kernels can be generated from one basic wavelet by dilation and rotation. In these experiments we initialize the adaptive filter banks as follows: every bank consists of 16 Gabor filters with a bandwidth of 1, at eight orientations θ=0, 22.5, 45, 67.5, 90, 112.5, 135 and 157.5 degrees, and two scales (wavelengths) varying by one octave, $\lambda= \{ 5,5\sqrt{2}\}$. These scales ensure that the Gabor function yields an adequate number of visible parallel excitatory and inhibitory stripe zones. Depending on the patch size and the nature of the data at hand, different scales might be more suitable. We set the phase offset ϕ=0 and the aspect ratio γ=1 for all filters. In this way we create center-on symmetric filters with circular support.
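A sketch of such an initialization is given below. The Gabor parameterization and the bandwidth-to-σ relation (σ ≈ 0.56λ for a one-octave bandwidth) follow the standard formulation and are stated here as assumptions; only the wavelengths, orientations, phase offset and aspect ratio are taken from the setup above.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=1.0, phase=0.0):
    """Real 2D Gabor kernel: Gaussian envelope modulated by a cosine wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

def initial_filter_bank(size=15):
    """16 Gabor kernels: 8 orientations x 2 scales (lambda = 5, 5*sqrt(2)),
    phase offset 0 and aspect ratio 1, matching the initialization above."""
    thetas = np.deg2rad(np.arange(0.0, 180.0, 22.5))
    bank = []
    for lam in (5.0, 5.0 * np.sqrt(2.0)):
        sigma = 0.56 * lam          # assumed relation for a one-octave bandwidth
        for theta in thetas:
            bank.append(gabor_kernel(size, lam, theta, sigma))
    return bank
```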

We run the localized version of CIA-LVQ with matrices $\varOmega_k$ initialized with the identity matrix and 4 prototypes per class for E=300 epochs. The prototypes are initialized as the mean of the corresponding class. For VisTex the training error is 5.75 % and the error on the test set 15 %. For the KTH-TIPS data set, CIA-LVQ reaches training and test errors of 15.4 % and 22.8 % respectively.

We use the same data sets and the same filter banks to compare with the Gabor-based Opponent Color Features (OCF) [15], the Color Local Binary Patterns (Color LBP) [28] and the common approach of deriving textural information only from the luminance plane of images [5]. The luminance approach is considered to often outperform combined color and texture features [19]. We implement this approach with an RGB to gray (RGB2G) transformation, which builds intensity values as a weighted sum of the color components of every pixel:

$$ I_{(x,y)} = 0.2989 \cdot R_{(x,y)} + 0.587 \cdot G_{(x,y)} + 0.114 \cdot B_{(x,y)} $$
(24)

We again vectorize all patches $\mathbf{s}$, and in this case the image patch descriptor is given by

$$ \mathbf {r}_2(\mathbf {s})=\mathbf {s}*\sum_l \mathbf{G}^l . $$
(25)

For OCF we use a k-nearest neighbors (k-NN) classification scheme with precisely the set of features and the dissimilarity measure suggested by the authors of [15], whereas for the Color LBP we use rotation-invariant uniform LBP histograms in (8,1) neighborhoods and the Euclidean distance in a k-NN scheme. We choose the size of the neighborhood in relation to the patch size and the dimensions of the feature vectors created. For the RGB2G approach we use the k-NN scheme with a dissimilarity measure similar to Eq. (8):

$$ d_\mathbf{G}\bigl(\mathbf {x}^i,\mathbf {x}^j\bigr) = \bigl\lVert\bigl|\mathbf {r}_2\bigl(\mathbf {x}^i\bigr)\bigr|^2 - \bigl|\mathbf {r}_2\bigl(\mathbf {x}^j\bigr)\bigr|^2\bigr\rVert^2 . $$
(26)

For all k-NN schemes we cross-validate the number of nearest neighbors using the values k=1,3,…,15 on the test image patches from the training subsets. The optimal k obtained is then used for the experiments on the previously unseen evaluation image patches. Ties are resolved by defaulting to the 1-NN classifier.
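For completeness, the following sketch illustrates this model selection protocol: k is chosen on the test patches of the training subset and ties in the vote fall back to the 1-NN label. The function names and the generic distance argument are illustrative; they do not reproduce the exact feature pipelines of the compared methods.

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k, dist):
    """k-NN vote; a tie between the top classes defaults to the 1-NN label."""
    d = np.array([dist(x, t) for t in train_X])
    nearest = np.argsort(d)[:k]
    votes = Counter(train_y[i] for i in nearest).most_common()
    if len(votes) > 1 and votes[0][1] == votes[1][1]:
        return train_y[nearest[0]]          # tie: default to the 1-NN classifier
    return votes[0][0]

def select_k(train_X, train_y, test_X, test_y, dist, ks=range(1, 16, 2)):
    """Cross-validate k = 1, 3, ..., 15 and return the value with lowest error."""
    errors = {}
    for k in ks:
        preds = [knn_predict(train_X, train_y, x, k, dist) for x in test_X]
        errors[k] = float(np.mean([p != y for p, y in zip(preds, test_y)]))
    return min(errors, key=errors.get)
```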

5.1 Comparisons on the VisTex Data Set

The k-NN scheme shows a test error of 9.1 % based on the OCF (k=3), 2 % based on the Color LBP (k=1) and 25.8 % based on the RGB2G transformation (k=1), but the most interesting comparison relies on the evaluation set, which reveals the generalization ability of each method. Here the k-NN scheme produces much higher error rates of 35.2 %, 25.2 % and 50 % for OCF, Color LBP and RGB2G respectively, while CIA-LVQ has an error of 13.1 %, of the same order of magnitude as for the test patches. Table 1 presents in detail the confusion matrices and class-wise accuracies of all methods for the evaluation set.

Table 1 Confusion matrices for the VisTex evaluation set

CIA-LVQ consistently outperforms all other methods and displays a remarkable ability to generalize to previously unknown data. The magnitudes of the prototypes which classify the evaluation set are shown in Fig. 5. Additionally, we show some example patches from the evaluation set which are classified correctly, together with their descriptors, in Fig. 6, and some examples of wrongly classified patches in Fig. 7. Finally, Fig. 8 depicts in the spatial domain the optimized sums of kernels that are used together with the corresponding prototypes to classify the evaluation set. The accuracy rates of the proposed approach do not vary much among the different classes. However, Brick and Food are the most difficult to classify using a small patch size, due to the large size of the texture patterns and the possibly low contrast respectively. Therefore Color LBP, being invariant to monotonic contrast changes, outperforms CIA-LVQ for the class Food.

Fig. 5 Plots of the optimized prototypes $|\mathbf{w}^L|$ actively used to classify the data in the evaluation set of the VisTex database. The names consist of the corresponding class name and the index number (1–4) of the prototype

Fig. 6 Plots of the descriptors $|\mathbf{r}_k|$ of some correctly classified image patches from the evaluation set of the VisTex database

Fig. 7 Plots of the descriptors $|\mathbf{r}_k|$ of some wrongly classified image patches from the evaluation set of the VisTex database

Fig. 8 Plots of the final form of the filter kernels actively used to classify the evaluation set of the VisTex database. The filter kernels have been locally adapted during training. The names consist of the corresponding class name and the index number (1–4) of the kernel

5.2 Comparisons on the KTH-TIPS Database

The k-NN scheme shows a test error of 41.7 % based on the OCF (k=13), 26.4 % based on the Color LBP (k=11) and 52.7 % based on the RGB2G transformation (k=11), all higher than what CIA-LVQ achieves. On the evaluation set the superior performance of the proposed technique becomes even clearer. The k-NN scheme reaches error rates of 46.4 %, 35.6 % and 58.4 % for OCF, Color LBP and RGB2G respectively, while CIA-LVQ has an error of 20.3 %, again of the same order of magnitude as for the test patches. Table 2 presents in detail the confusion matrices and class-wise accuracies of all methods for the evaluation set of the KTH-TIPS database.

Table 2 Confusion matrices for the KTH-TIPS evaluation set

CIA-LVQ is outperformed only for the Corduroy class by all three methods we compare with, while Color LBP also achieves better results for Aluminium Foil, Brown Bread and Cotton. The prototypes which classify the evaluation set of KTH-TIPS are shown in Fig. 9, together with examples of correctly (Fig. 10) and wrongly (Fig. 11) classified patches and their corresponding descriptors. The optimized sums of kernels are shown in Fig. 12. The classes Corduroy, Brown Bread and Cotton, like the class Sponge, are characterized by nuances of brown color and very diverse patterns. Therefore, the former two are often mistaken by CIA-LVQ for one another or for Sponge. The same occurs between Cotton and Linen, which are dominated by very similar colors and often low contrast. Finally, the performance of CIA-LVQ with regard to the class Aluminium Foil is mostly due to the combination of large texture patterns and the small patch size.

Fig. 9 Plots of the optimized prototypes $|\mathbf{w}^L|$ actively used to classify the data in the evaluation set of the KTH-TIPS database. The names consist of the corresponding class name and the index number (1–4) of the prototype

Fig. 10 Plots of the descriptors $|\mathbf{r}_k|$ of some correctly classified image patches from the evaluation set of the KTH-TIPS database

Fig. 11 Plots of the descriptors $|\mathbf{r}_k|$ of some wrongly classified image patches from the evaluation set of the KTH-TIPS database

Fig. 12 Plots of the final form of the filter kernels actively used to classify the evaluation set of the KTH-TIPS database. The filter kernels have been locally adapted during training. The names consist of the corresponding class name and the index number (1–4) of the kernel

6 Conclusion and Outlook

In this contribution we propose a prototype-based framework for color texture classification. As an example we initialize the system with Gabor filters and classify color texture patterns in 15×15 patches randomly drawn from images of two public data sets. The results show that CIA-LVQ can learn typical texture patterns with very good generalization, even from relatively small patches and filter banks, and that it consistently outperforms state-of-the-art techniques used for color texture analysis. It is also of conceptual value that this LVQ adaptation is suitable for learning in the complex number domain.

The resulting filter kernels may not strictly conform to the notion of Gabor filters; they preserve, however, the important property of symmetric and periodic excitatory and inhibitory regions, the shape and size of which are data driven. In principle every adaptive metric method could be extended following our suggestion, but we consciously choose LVQ because of its easily interpretable results and its lower computational costs in comparison to other approaches. Similarly to Gabor filters, any other family of 2D filters commonly used to describe gray scale image information could be adapted and applied to color image analysis with this algorithm; initializing with a filter bank of differences of Gaussians for color edge detection is a possible example. Furthermore, depending on the task at hand it might be desirable that two patches in which the same texture occurs at different positions are not interpreted as similar. In this case another dissimilarity measure should be used, namely $\lVert|\mathbf {r}(\mathbf {x}^{i}) - \mathbf {r}(\mathbf {w}^{L}) |\rVert^{2}$, which is not based on the difference of magnitudes. This might be of advantage, for example, in the recognition of objects such as traffic signs, where a corner or an edge might have a different interpretation depending on its position in the image. Combinations of CIA-LVQ with keypoint detectors, which avoid drawing patches from random positions within an image, can also be implemented easily and can be beneficial especially for tasks related to object recognition. A variant of CIA-LVQ that is completely unbiased with respect to the nature of the filters, in which the adaptive kernels are initialized randomly, is also of particular interest, mostly in cases where there is no prior knowledge about the nature of the data (e.g. medical imaging).

CIA-LVQ formulates a novel general principle: based on a differentiable convolution and an adaptive filter bank, the algorithm optimizes the classification. Contrary to standard approaches, which are based either on a single-channel representation of the images through a fixed transformation or on empirical observations for combining color and textural information, the proposed technique offers the alternative of data-driven learning of suitable, parameterized image descriptors. The ability to automatically weigh different color channels and different filters in localized neighborhoods, according to their importance for the classification task, is the most significant factor that distinguishes our approach.