Background

Membrane proteins account for roughly one third of all proteins and play a crucial role in processes such as cell-to-cell signaling, transport of ions across membranes, and energy metabolism [1–3], and are a prime target for therapeutic drugs [2, 4–6]. One important subfamily of membrane proteins is the transmembrane proteins, of which there are two main types:

  • α-helical proteins, in which the membrane-spanning regions are made up of α-helices, and

  • β-barrel proteins, in which the membrane-spanning regions are made up of β-strands.

β-barrel proteins are found mainly in the outer membrane of gram-negative bacteria, and possibly in eukaryotic organelles such as mitochondria, whereas α-helical proteins are found in eukaryotes and the inner membranes of bacteria [7].

Given the obvious biological and medical significance of transmembrane proteins, it is of tremendous practical importance to identify the location of transmembrane segments. There are, however, difficulties with obtaining the three-dimensional structure of membrane proteins using experimental techniques:

  • Membrane proteins have both a hydrophilic part and a hydrophobic part, and hence are not entirely soluble in either aqueous or organic solvents; this makes them difficult to crystallize, and therefore difficult to analyze using X-ray crystallography, which requires crystallization of the sample.

  • Membrane proteins tend to denature upon removal from the membrane, making their three-dimensional structure difficult to analyze.

The difficulty of inferring the secondary or tertiary structure of transmembrane proteins using experimental techniques has led to a surge of interest in applying techniques from machine learning and bioinformatics to infer secondary structure from primary structure in these proteins. Such techniques include discriminant analysis [8], decision trees [9], neural networks [10–13], support vector machines [14–18], and hidden Markov models [19, 20].

Another interesting class of proteins is the intrinsically unstructured proteins, proteins that need not be folded into a particular configuration to carry out their function, existing instead as dynamic ensembles in their native state [21–24]. Intrinsically unstructured proteins have been associated with a wide range of functions including molecular recognition, molecular assembly/disassembly, and protein modification [21, 22, 25].

We are interested in investigating the physicochemical properties of various classes of protein segments. In particular, we are interested in determining which properties are useful for discriminating transmembrane segments from non-transmembrane segments in transmembrane proteins, and for discriminating intrinsically unstructured segments from intrinsically structured segments in transmembrane proteins. We are further interested in any similarities or differences in physicochemical properties across these four classes of segments. We will then apply the results of this analysis to construct classifiers to discriminate transmembrane from non-transmembrane segments in transmembrane proteins.

Results and discussion

Physicochemical properties

We are interested in determining which physicochemical properties are most useful for discriminating transmembrane segments from non-transmembrane segments in transmembrane proteins, and for discriminating intrinsically unstructured segments from intrinsically structured segments in transmembrane proteins. We are further interested in any similarities or differences in physicochemical properties across these four classes of segments.

Certain properties, such as hydropathy and polarity, can be measured in different ways; this results in different scales. We are also interested in determining which scales are the most effective in discriminating transmembrane segments from non-transmembrane segments, and in discriminating intrinsically unstructured from intrinsically structured segments in transmembrane proteins.

Our interest is in properties that can be easily computed given only a sequence of amino acids; we therefore considered properties that depend only on the type of each amino acid in a sequence, including:

  • Hydropathy, a measure of the relative hydrophobicity of an amino acid. There are four hydropathy scales in common use: the Kyte-Doolittle [26], Eisenberg-Schwarz-Komaromy-Wall [27], Engelman-Steitz-Goldman [28], and Liu-Deber [29] scales.

  • Polarity, a measure of how charge is distributed over an amino acid. Polarity affects how amino acids interact and helps to determine protein structure. There are two polarity scales in common use: the Grantham [30] and the Zimmerman-Eliezer-Simha [31] scales.

  • Flexibility, a measure of the degree to which an amino acid residue contributes to the flexibility of a protein.

  • Polarizability, a measure of the extent to which positive and negative charge can be separated in the presence of an applied electric field.

  • van der Waals volume, a measure of the volume occupied by an amino acid.

  • Bulkiness, another measure of the volume occupied by an amino acid; bulkiness is correlated with hydrophobicity [32].

  • Electronic effects, a measure that takes into account steric factors, inductive effects, resonance effects, and field effects [33].

  • Helicity, the propensity of an amino acid to contribute to the formation of helical structures in proteins [34].

Given a sequence of amino acids, the “pointwise” property value associated with a particular position in the sequence depends only on which of the 20 amino acids occurs at that position. To increase the robustness of our results, we work with average property values instead of pointwise property values. The average value of a given property at a particular amino acid A in the sequence is the average of the pointwise property values of the amino acids contained in a window of length L centered at A. The effectiveness of each property at discriminating transmembrane from non-transmembrane segments and intrinsically unstructured from intrinsically structured segments was assessed based on two criteria:

(1) For a given property X, the degree to which the class-conditional distributions for the two classes overlap, that is, the degree to which p_X(x | class 1) and p_X(x | class 2) overlap. The less these two probability distributions overlap, the more easily the two classes can be separated. Knowledge of these probability distributions forms the basis for a Bayesian classifier, which classifies an instance having a value x for property X as “class 1” if and only if

p_X(x | class 1) / p_X(x | class 2) > P{class 2} / P{class 1}

where P{class 1} is the probability of observing a class 1 instance and P{class 2} is the probability of observing a class 2 instance. The class-conditional probability distributions for the above properties are plotted in Figures 1, 2, and 3. (A minimal code sketch of this decision rule is given after criterion (2) below.)

(2) The Overlap Ratio, defined in the Methods section, is a numerical measure of the overlap between the conditional probabilities P{class 1|X = x} and P{class 2|X = x}. The smaller the Overlap Ratio, the more easily the two classes can be discriminated.
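
To make criterion (1) concrete, the following minimal sketch applies the Bayesian decision rule above, assuming the two class-conditional densities have already been estimated (for example, as normalized histograms); the function and argument names are illustrative and are not part of the original study.

    def bayes_classify(x, p1, p2, prior1, prior2):
        """Assign x to class 1 iff p1(x)/p2(x) > prior2/prior1.

        p1 and p2 are callables estimating p_X(x | class 1) and p_X(x | class 2);
        prior1 and prior2 are the class priors P{class 1} and P{class 2}.
        """
        if p2(x) == 0.0:
            return 1  # no class-2 support at x, so class 1 wins by default
        return 1 if p1(x) / p2(x) > prior2 / prior1 else 2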

The Overlap Ratios for discriminating transmembrane from non-transmembrane segments are shown in Table 1, while the Overlap Ratios for discriminating intrinsically unstructured from intrinsically structured segments are shown in Table 2. It turns out that the discriminating power of a given property depends on the length L of the window over which property values are averaged; Overlap Ratios are given in Tables 1 and 2 for all odd values of the window length L between 9 and 31.
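
For concreteness, the following sketch computes the kind of averaged property profile described above, using the Kyte-Doolittle hydropathy values [26] as the example scale; the function name, the truncation of windows at the sequence ends, and the example window length are illustrative choices of ours rather than details taken from the study.

    # Kyte-Doolittle hydropathy values [26], indexed by one-letter amino acid code.
    KYTE_DOOLITTLE = {
        'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
        'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
        'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
        'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
    }

    def windowed_average(sequence, scale, window=9):
        """Average the pointwise property over a window of length `window`
        centered at each position; windows are truncated at the sequence ends."""
        half = window // 2
        values = [scale[aa] for aa in sequence]
        profile = []
        for i in range(len(values)):
            lo, hi = max(0, i - half), min(len(values), i + half + 1)
            profile.append(sum(values[lo:hi]) / (hi - lo))
        return profile

    hydropathy_profile = windowed_average("MKTLLILAVLAAVSGSWA", KYTE_DOOLITTLE, window=9)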

Figure 1

Conditional probability distributions p(x|TM), p(x|Non-TM) (on the left), and p(x|IU), p(x|Non-IU) (on the right), where x is hydropathy, as determined by the Kyte-Doolittle, Eisenberg-Schwarz-Komaromy-Wall, Engelman-Steitz-Goldman, and Liu-Deber scales. TM = transmembrane, IU = intrinsically unstructured. The plots on the left were reproduced with permission from [38].

Figure 2

Conditional probability distributions p(x|TM), p(x|Non-TM) (on the left), and p(x|IU), p(x|Non-IU) (on the right), where x is, from top to bottom, polarity, as determined by the Grantham and Zimmerman-Eliezer-Simha scales, bulkiness, and flexibility. TM = transmembrane, IU = intrinsically unstructured. The plots on the left were reproduced with permission from [38].

Figure 3

Conditional probability distributions p(x|TM), p(x|Non-TM) (on the left), and p(x|IU), p(x|Non-IU) (on the right), where x is, from top to bottom, van der Waals volume, polarizability, electronic effects, and helicity. TM = transmembrane, IU = intrinsically unstructured. The plots on the left were reproduced with permission from [38].

Table 1 Overlap Ratios for discriminating transmembrane segments from non-transmembrane segments in membrane proteins as a function of window length (W.L.).
Table 2 Overlap Ratios for discriminating intrinsically unstructured segments from intrinsically structured segments in membrane proteins as a function of window length (W.L.).

Our conclusions were as follows:

  • Whereas all four hydropathy scales can be used for discriminating transmembrane segments from non-transmembrane segments in transmembrane proteins, the Liu-Deber scale is the best scale for this task.

  • Whereas all four hydropathy scales can be used for discriminating intrinsically unstructured segments from intrinsically structured segments in transmembrane proteins, the Eisenberg-Schwarz-Komaromy-Wall scale is the best scale for this task.

  • Whereas both polarity scales can be used for discriminating transmembrane from non-transmembrane segments and for discriminating intrinsically unstructured from intrinsically structured segments in transmembrane proteins, the Grantham scale is slightly better for these tasks.

  • For both classification problems (discriminating transmembrane from non-transmembrane segments and discriminating intrinsically unstructured from intrinsically structured segments), flexibility provided some degree of discriminating power, and bulkiness provided still less; neither property was as effective as hydropathy or polarity at discriminating between the two classes.

  • For both classification problems, polarizability, van der Waals volume, electronic effects, and helicity did not discriminate well between the two classes.

Transmembrane segment classifiers

We tested four classification techniques on the problem of discriminating transmembrane segments from non-transmembrane segments in transmembrane proteins:

  • C4.5 [35], a decision tree algorithm.

  • SVMlight version 6.01 (linear kernel function) [36], a support vector machine algorithm.

  • Two variants of the Self-Organizing Global Ranking (SOGR) algorithm [37], SOGR-I [38, 39] and SOGR-IB [38, 39], which are described in detail in the Methods section. These algorithms depend on a number of parameters: the length L of the window used to extract features, the number of neurons m, the learning rate η_t, and the neighborhood size R. The performance of these algorithms depends on the choice of these parameters: for example, the performance of the SOGR-I algorithm as a function of the length of the window used to extract features is shown in Figure 4. Based on a series of experiments, we settled on a feature window length L of 10, a network size m of 16 neurons, a fixed learning rate η_t of 0.05, and a neighborhood size R of 2. Since the length of the window used to extract features was chosen to maximize the performance of the SOGR-I algorithm, the results will be slightly biased in favor of the SOGR-I and SOGR-IB algorithms.

Figure 4

Performance of the SOGR-I classifier as a function of the length of the window used to extract features, based on threefold cross-validation (fixed learning rate η_t = 0.05, neighborhood size R = 2, number of neurons = 16). Reproduced with permission from [38].

Designing a classifier also involves selecting the features that are most useful for the problem of interest. Guided by our investigation of physicochemical properties, we based the classification on three features:

  • Hydropathy (Liu-Deber scale)

  • Polarity (Grantham scale)

  • Flexibility

The performance of the above four classification techniques under ten-fold cross-validation when hydropathy (Liu-Deber scale), polarity (Grantham scale), and flexibility are used as features is shown in Table 3, while the performance when only polarity (Grantham scale) and flexibility are used as features is shown in Table 4. It is interesting that performance drops only slightly when two features are used instead of three. All four classifiers exhibited good performance, with out-of-sample accuracies of approximately 75%. While this may seem low, the substantial overlap of the transmembrane and non-transmembrane classes seen in Figures 1, 2, and 3 makes this a nontrivial classification problem. Filtering strategies can be used to improve the performance of these classifiers [38, 39].
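
As an illustration of the evaluation protocol rather than the original tooling, the sketch below runs ten-fold cross-validation on the three-feature representation using scikit-learn stand-ins for the decision tree and support vector machine (the SOGR variants are custom algorithms described in the Methods section and are omitted here); the arrays X and y are placeholders.

    # Ten-fold cross-validation with scikit-learn stand-ins for C4.5 and SVMlight.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC
    from sklearn.tree import DecisionTreeClassifier

    # X: (n_segments, 3) averaged hydropathy, polarity, and flexibility values.
    # y: (n_segments,) labels (1 = transmembrane, 0 = non-transmembrane).
    X = np.random.rand(200, 3)            # placeholder data, for illustration only
    y = np.random.randint(0, 2, 200)

    for name, clf in [("decision tree", DecisionTreeClassifier()),
                      ("linear SVM", LinearSVC())]:
        scores = cross_val_score(clf, X, y, cv=10)
        print(f"{name}: mean accuracy {scores.mean():.3f}")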

Table 3 Accuracy of discriminating transmembrane segments from non-transmembrane segments in transmembrane proteins using the SOGR-I and SOGR-IB classifiers, a decision tree classifier (C4.5), and a support vector machine classifier (SVMlight version 6.01), based on ten-fold cross-validation. Three features were used, namely hydropathy (Liu-Deber scale), polarity (Grantham scale), and flexibility.
Table 4 Accuracy of discriminating transmembrane segments from non-transmembrane segments in transmembrane proteins using the SOGR-I and SOGR-IB classifiers, a decision tree classifier (C4.5), and a support vector machine classifier (SVMlight version 6.01), based on ten-fold cross-validation. Two features were used, namely polarity (Grantham scale) and flexibility.

Conclusions

We determined that the most useful properties for discriminating transmembrane segments from non-transmembrane segments and for discriminating intrinsically unstructured segments from intrinsically structured segments in transmembrane proteins were hydropathy, polarity, and flexibility, and based on these properties, constructed a number of classifiers to identify transmembrane segments with an out-of-sample accuracy of approximately 75%. Several interesting observations emerged from our study:

  • Intrinsically unstructured segments and transmembrane segments tend to have opposite properties, as summarized in Table 5. For example, unstructured segments tended to have a low hydropathy value, whereas transmembrane segments tended to have a high hydropathy value. These results are in agreement with previous work that found that transmembrane segments tend to be more hydrophobic than non-transmembrane segments, due to the fact that transmembrane α-helices require a stretch of 12-35 hydrophobic amino acids to span the hydrophobic region inside the membrane [26].

Table 5 Tendencies of various properties for transmembrane (TM) and intrinsically unstructured (IU) segments.
  • Transmembrane proteins appear to be much richer in intrinsically unstructured segments than other proteins; about 70% of transmembrane proteins contain intrinsically unstructured regions, as compared to about 35% of other proteins.

  • In approximately 70% of transmembrane proteins that contain intrinsically unstructured segments, the intrinsically unstructured segments are close to transmembrane segments.

These observations may provide insight into the structural and functional roles that intrinsically unstructured segments play in membrane proteins, and may also aid in the identification of intrinsically unstructured and transmembrane segments from primary protein structure.

Methods

Physicochemical properties

The Overlap Ratio, a quantitative measure of how well two classes (referred to generically as “class 1” and “class 2”) can be discriminated based on a property X, was calculated as follows.

  1. We construct a graph such that:

    (a) The horizontal axis corresponds to the property X. We divide this axis into bins.

    (b) The y-value associated with the bin corresponding to X values between x and x + ε is the fraction of all instances in the training set that belong to class 1 and have a value for the feature X in the range [x, x + ε), where ε > 0 is small.

The graph represents an approximation to the function P{class 1 | X = x}. We define the complementary function P{class 2 | X = x} using

P{class 2 | X = x} = 1 − P{class 1 | X = x}
  2. Let

     f_1(x) ≡ P{class 1 | X = x}
     f_2(x) ≡ P{class 2 | X = x}

The Overlap Ratio is then defined as:

Overlap Ratio = (Area under both f_1(x) and f_2(x)) / (Area under f_1(x) + Area under f_2(x))

The smaller the Overlap Ratio, the more easily the two classes can be discriminated.
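
A sketch of this computation, assuming equal-width bins over the observed range of the property; the number of bins is an arbitrary choice, and the "area under both f_1(x) and f_2(x)" is interpreted here as the area under the pointwise minimum of the two curves.

    import numpy as np

    def overlap_ratio(values_class1, values_class2, n_bins=50):
        """Overlap Ratio from histogram estimates of P{class 1 | X = x} and its complement."""
        v1 = np.asarray(values_class1, dtype=float)
        v2 = np.asarray(values_class2, dtype=float)
        all_values = np.concatenate([v1, v2])
        edges = np.linspace(all_values.min(), all_values.max(), n_bins + 1)

        counts1, _ = np.histogram(v1, bins=edges)
        counts2, _ = np.histogram(v2, bins=edges)
        total = counts1 + counts2

        # f1 approximates P{class 1 | X in bin}; f2 = 1 - f1 on occupied bins.
        f1 = np.where(total > 0, counts1 / np.maximum(total, 1), 0.0)
        f2 = np.where(total > 0, 1.0 - f1, 0.0)

        # "Area under both" taken as the area under min(f1, f2); bin widths cancel.
        return np.minimum(f1, f2).sum() / (f1.sum() + f2.sum())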

The SOGR-I and SOGR-IB classification algorithms

Overview

The Self-Organizing Global Ranking (SOGR) algorithm [37] was inspired by Kohonen's Self-Organizing Map (SOM) algorithm [40]. In the SOM algorithm, each neuron has associated with it a topological neighborhood, and the algorithm is such that neighboring neurons in the topological space tend to arrange themselves over time into a grid in feature space that mimics the neighborhood structure in the topological space. The SOGR algorithm differs from the SOM algorithm by dropping the topological neighborhood and replacing it with the concept of a global neighborhood generated by ranking. We consider two variants of the SOGR algorithm:

  • The first variant, SOGR-I [38, 39], modifies the initialization scheme of SOGR.

  • The second variant, SOGR-IB [38, 39] (“B” stands for “Batch update”), removes the dependence on the order in which instances are presented by only updating the weights after each cycle, where a cycle involves presenting the entire training set to the network, one instance at a time. This variant also uses the modified initialization procedure described above.

Before we describe the above modifications in detail, we describe the SOGR algorithm itself.

The SOGR classification algorithm

We assume that m neurons are used; each neuron j has a weight vector W_j(t), where t represents time. Let the initial position of neuron j at time t = 0 be W_j(0), and assume that the training set consists of instances (x_i, y_i), i = 1, …, n, where the x_i are feature vectors and y_i denotes the class of an instance.

  1. Initialization: Choose initial positions W_j(0) in feature space for the m neurons by assigning the neurons random positions in feature space.

  2. Present the instances in the training set to the network, one at a time. As each instance is presented to the network, the time index t is increased by 1. For each instance (x_i, y_i) in the training set, the positions of one or more neurons are adjusted as follows:

  • Identifying Winning Neurons: Find the R closest neurons to the feature vector x_i, that is, find the R neurons with the smallest value of ||x_i − W_j(t)||. These R neurons constitute the “neighborhood” of the input vector. Let Γ be the set of indices of the R winning neurons.

  • Updating Weights: Adjust the positions of each of the R winning neurons using the update rule

    W_j(t + 1) = W_j(t) + η_t (x_i − W_j(t))

where j ∈ Γ and η_t is the learning rate. The learning rate is chosen to decrease with time in order to force convergence of the algorithm. In [37] it is suggested that the learning rate be decreased at an exponential rate, and that it should be smaller for larger neighborhood sizes R.

  3. Assigning Classes to Neurons: Associated with each neuron j is a count of the number of instances belonging to each class that are closer to neuron j than to any other neuron. This count is calculated as follows:

  • For each neuron, initialize the counts to zero.

  • For each instance (x_i, y_i) in the training set, find the closest neuron to the feature vector x_i, that is, find the neuron with the index j*, where

    j* = arg min_j ||x_i − W_j(t)||

and increment the count in neuron j* corresponding to class y_i by 1.

  • After all instances in the training set have been considered, each neuron is assigned to the class corresponding to the largest count for that neuron.

After the training process has been completed, a test instance can be classified by assigning it the class label of the nearest neuron.
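
The sketch below follows the three steps above, using hypothetical function names; for simplicity it keeps the learning rate fixed and makes several passes over the training set rather than applying the decaying schedule suggested in [37].

    import numpy as np

    def train_sogr(X, y, m=16, R=2, eta=0.05, n_passes=20, rng=None):
        """Train an SOGR-style network: rank-based winner updates, then class voting."""
        rng = np.random.default_rng() if rng is None else rng
        n, d = X.shape
        # Step 1: random initialization within the bounding box of the data.
        W = rng.uniform(X.min(axis=0), X.max(axis=0), size=(m, d))

        # Step 2: move the R globally closest ("winning") neurons toward each instance.
        for _ in range(n_passes):
            for i in range(n):
                dists = np.linalg.norm(X[i] - W, axis=1)
                winners = np.argsort(dists)[:R]
                W[winners] += eta * (X[i] - W[winners])

        # Step 3: assign each neuron the majority class of the instances nearest to it.
        classes = np.unique(y)
        counts = np.zeros((m, classes.size))
        for i in range(n):
            j = np.argmin(np.linalg.norm(X[i] - W, axis=1))
            counts[j, np.searchsorted(classes, y[i])] += 1
        neuron_class = classes[counts.argmax(axis=1)]  # unused neurons default to classes[0]
        return W, neuron_class

    def classify_sogr(x, W, neuron_class):
        """Label a test instance with the class of its nearest neuron."""
        return neuron_class[np.argmin(np.linalg.norm(x - W, axis=1))]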

The SOGR-I classification algorithm

The first variant, SOGR-I [38, 39], modifies the initialization scheme of SOGR. Specifically, assume that the feature space is d-dimensional, so that the feature vectors x_i belong to ℝ^d. For each feature k, we find the smallest and largest values of that feature over the entire training set, denoted L_k and U_k respectively:

L_k = min_i x_ik,    U_k = max_i x_ik

where x_ik is the kth element of the feature vector x_i. Then the initial positions of the m neurons are chosen as:

W_jk(0) = L_k + ((j − 1)/(m − 1)) (U_k − L_k),    j = 1, …, m,    k = 1, …, d

Thus the m neurons are evenly distributed along the line connecting (L_1, L_2, …, L_d) to (U_1, U_2, …, U_d); a short code sketch of this initialization is given after the list below. This approach has several advantages over other initialization methods:

  • It guarantees that the neurons will be in some sense evenly distributed throughout the feature space. Random initialization, on the other hand, does not guarantee this. If one has a large feature space, say of 60 dimensions, and comparatively few neurons, say 50, then with random initialization those neurons will with high probability not be evenly distributed throughout the feature space.

  • Even a small number of neurons can be used to populate the feature space. If we consider an alternate initialization procedure in which one populates the feature space with a d-dimensional grid of neurons, and there are q grid points along each feature space axis, then the total number of neurons required to populate this grid is q^d. For example, if q = 3 and the feature space has 60 dimensions, then the number of neurons required is

    q^d = 3^60 ≈ 4.239 × 10^28

which is clearly infeasible.
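
A sketch of the line initialization defined above (the function name is illustrative): neurons are placed at equal intervals along the diagonal of the bounding box of the training data.

    import numpy as np

    def sogr_i_init(X, m):
        """Place m neurons evenly along the line from (L_1, ..., L_d) to (U_1, ..., U_d)."""
        L = X.min(axis=0)                    # per-feature minima L_k
        U = X.max(axis=0)                    # per-feature maxima U_k
        steps = np.linspace(0.0, 1.0, m)     # (j - 1)/(m - 1) for j = 1, ..., m
        return L + steps[:, None] * (U - L)  # shape (m, d)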

The SOGR-IB classification algorithm

The second variant, SOGR-IB [38, 39], addresses two problems with the original SOGR algorithm:

  • The SOGR algorithm updates the weights after each new instance is presented to the network; as a result, the neuron trajectories can oscillate wildly.

  • The SOGR algorithm specifies that the learning rate should be decreased during the course of training, for example at an exponential rate. The problem is that if the learning rate is decreased too rapidly, then the neurons may get stuck before they have reached their optimal positions.

SOGR-IB (“B” stands for “Batch update”) addresses these problems in two ways:

  • It uses a “batch update” strategy for updating the positions of the neurons in feature space. This eliminates the dependence of the results on the order in which instances are presented to the network, and also stabilizes the trajectories of the neurons.

  • The batch update strategy allows the use of a fixed, but small, learning rate η_t, which eliminates the problem of the weights getting stuck because the learning rate was decreased too quickly.

The SOGR-IB algorithm is described below (a brief code sketch of a single batch pass follows the description):

  1. Initialization: Choose initial positions W_j(0) in feature space for the m neurons using the SOGR-I initialization strategy. Set t = 0.

  2. Repeat the following until the “energy” defined by

     Q(t) = (1/(2nR)) Σ_(instances i) Σ_(neurons j) m_ij ||x_i − W_j(t)||^2

does not reach a new minimum over a number of iterations through the training set, where n is the number of training instances, R is the number of neurons neighboring a given training instance that will be updated, and, for each instance (x_i, y_i) in the training set, m_ij = 1 for the neurons j that are among the R closest neurons to the feature vector x_i and m_ij = 0 for all other neurons j. After each pass through the training set, the time index t is incremented by 1.

    (a) Let Z_j be the “accumulator” corresponding to neuron j. Initialize Z_j to 0 for all neurons j.

    (b) Present the instances (x_i, y_i) in the training set to the network, one at a time. After each instance is presented, the “accumulators” are updated as follows:

  • Identifying Winning Neurons: Find the R closest neurons to the feature vector x_i, that is, find the R neurons with the smallest value of ||x_i − W_j(t)||. These R neurons constitute the “neighborhood” of the input vector. Let Γ be the set of indices of the R winning neurons.

  • Updating Accumulators: Adjust the accumulators corresponding to each of the R closest neurons using the update rule

    Z_j = Z_j + (1/(nR)) η_t (x_i − W_j(t))

where j ∈ Γ and η_t is the learning rate.

    (c) Updating Neurons: After all instances in the training set have been presented to the network, update the weights for each neuron j using the rule:

    W_j(t + 1) = W_j(t) + Z_j

where n, which enters through the accumulator update in step (b), is the number of instances in the training set.

  3. Assigning Classes to Neurons: Same as Step 3 in the SOGR algorithm above.
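
The sketch below implements the energy Q(t) and a single batch pass as described above, with illustrative names; a full training run would repeat the pass (starting from the SOGR-I initialization) until Q stops reaching a new minimum, and the class-assignment step is the same as in the SOGR sketch given earlier.

    import numpy as np

    def sogr_ib_energy(X, W, R):
        """Q(t) = 1/(2nR) * sum over instances of the R smallest squared distances."""
        n = X.shape[0]
        Q = 0.0
        for i in range(n):
            d2 = np.sum((X[i] - W) ** 2, axis=1)
            Q += np.sort(d2)[:R].sum()        # m_ij = 1 only for the R closest neurons
        return Q / (2 * n * R)

    def sogr_ib_pass(X, W, R=2, eta=0.05):
        """One batch pass: accumulate updates over all instances, then apply them."""
        n = X.shape[0]
        Z = np.zeros_like(W)                  # one accumulator Z_j per neuron
        for i in range(n):
            dists = np.linalg.norm(X[i] - W, axis=1)
            winners = np.argsort(dists)[:R]
            Z[winners] += eta * (X[i] - W[winners]) / (n * R)
        return W + Z                          # W_j(t+1) = W_j(t) + Z_j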