Classification and feature selection by a self-organizing neural network
This article describes recent improvements to an original neural network construction method that was previously limited to neurons with two inputs. After a brief review of the main principles for building such a network, the authors introduce the ability for a neuron to receive more than two inputs. Two problems then arise: how to choose the number of inputs of a neuron, and what becomes of its decision rule? Addressing these problems leads to an original feature selection method, based on genetic algorithms, and to an adaptation of a linear discrimination algorithm to non-separable problems. Experimental results on a handwritten digit recognition problem confirm the efficiency of the method.
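The abstract's genetic-algorithm-based feature selection can be illustrated by a minimal sketch: candidate feature subsets are encoded as bit masks, recombined and mutated, and scored by a fitness function. The fitness used below (rewarding a hypothetical set of informative features while penalizing subset size) and all names are illustrative assumptions, not the paper's actual algorithm or evaluation criterion.

```python
import random

random.seed(0)

N_FEATURES = 16
USEFUL = {1, 4, 7, 11}  # hypothetical informative features (stand-in for a real accuracy measure)

def fitness(mask):
    """Score a bit mask: reward informative features, penalize subset size."""
    selected = {i for i, bit in enumerate(mask) if bit}
    return len(selected & USEFUL) - 0.1 * len(selected)

def crossover(a, b):
    """One-point crossover of two bit masks."""
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    """Flip each bit independently with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in mask]

def evolve(pop_size=30, generations=60):
    """Truncation-selection GA over feature masks; returns the best mask found."""
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(sorted(i for i, bit in enumerate(best) if bit))
```

In a real setting the fitness would come from retraining or validating the classifier on the candidate subset, which is far more expensive than this toy surrogate; the selection, crossover, and mutation operators are standard choices, not necessarily the ones the paper adopts.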