Abstract
An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is applied to the approximation of smooth batch data containing the inputs and outputs of the hidden neurons and the final output of the network. The training set is related to the adjustable parameters of the network by weight equations, which may be compatible or incompatible. When the nonlinear and linear weight equations are compatible, we obtain their exact solutions; otherwise, we obtain the unique approximate solution of minimal norm, i.e., the one for which the norm of the difference between the left- and right-hand sides of these equations attains its minimum. This approach allows us to derive a novel adaptive learning rate. Modeling the different kinds of energy driving plant growth, as well as the concentrations of different substances in a higher-order chemical reaction, as multi-agent systems, one can predict the height of the plant and the concentrations of the substances, respectively.
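The minimal-norm solution of the weight equations described above can be illustrated with a short sketch. This is not the authors' implementation; the matrix `A` (hidden-neuron outputs) and vector `b` (desired network outputs) are hypothetical data, and the Moore–Penrose pseudoinverse stands in for the general approximate solution: if the linear system is compatible it returns the exact solution, otherwise the unique least-squares solution of minimal norm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical batch: hidden-neuron outputs for 5 samples, 3 hidden neurons,
# and the corresponding desired network outputs.
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)

# Minimal-norm (least-squares) solution of the linear weight equations A w = b:
# exact when the system is compatible, otherwise the unique vector minimizing
# ||A w - b|| with the smallest norm among all minimizers.
w = np.linalg.pinv(A) @ b

residual = np.linalg.norm(A @ w - b)
```

When `A` has full column rank, `w` coincides with the ordinary least-squares solution; the pseudoinverse matters precisely in the incompatible or rank-deficient case, where it selects the minimal-norm candidate.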
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this chapter
Adanhounmè, V., Dagba, T.K., Adédjouma, S.A. (2012). Neural Smooth Function Approximation and Prediction with Adaptive Learning Rate. In: Nguyen, N.T. (eds) Transactions on Computational Collective Intelligence VII. Lecture Notes in Computer Science, vol 7270. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32066-8_5
DOI: https://doi.org/10.1007/978-3-642-32066-8_5
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-32065-1
Online ISBN: 978-3-642-32066-8
eBook Packages: Computer Science, Computer Science (R0)