Teacher-Directed Learning with Gaussian and Sigmoid Activation Functions
In this paper, we propose a new computational method for information-theoretic competitive learning that maximizes information about target patterns as well as input patterns. We call the method teacher-directed learning, because target information directs networks to produce appropriate outputs. In our previous method, we used sigmoid functions to activate competitive units, and we found that this method could not increase information for some problems. To remedy this shortcoming, we use Gaussian activation functions to simulate competition in the intermediate layer, because changing the width of these functions accelerates the information-maximization process. In the output layer, we use ordinary sigmoid functions to produce outputs. We applied our method to two problems: an artificial data problem and a chemical data problem. In both cases, information was significantly increased with the Gaussian functions, and better generalization performance was obtained.
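The abstract's core idea can be sketched as follows: competitive units fire through Gaussian activations whose width controls the sharpness of competition, and the network's information is the mutual information between input patterns and competitive units. This is a minimal illustrative sketch, not the paper's exact equations; the normalization of activations into firing probabilities, the entropy-difference form of the information measure, and all function and variable names are assumptions for illustration.

```python
import math

def gaussian_activation(x, w, sigma):
    """Gaussian competitive-unit activation: units whose weight vectors lie
    closer to the input fire more strongly. The width sigma controls how
    sharp the competition is (the abstract notes that changing this width
    accelerates information maximization)."""
    d2 = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def sigmoid(u):
    """Ordinary sigmoid, as used in the output layer."""
    return 1.0 / (1.0 + math.exp(-u))

def competitive_info(patterns, weights, sigma):
    """Mutual information between input patterns and competitive units,
    I = H(units) - mean_s H(units | pattern s), with firing probabilities
    obtained by normalizing the Gaussian activations over units (a common
    formulation in information-theoretic competitive learning; illustrative
    only)."""
    cond = []
    for x in patterns:
        acts = [gaussian_activation(x, w, sigma) for w in weights]
        z = sum(acts)
        cond.append([a / z for a in acts])
    n = len(patterns)
    m = len(weights)
    marg = [sum(row[j] for row in cond) / n for j in range(m)]
    h_marg = -sum(p * math.log(p) for p in marg if p > 0)
    h_cond = -sum(p * math.log(p) for row in cond for p in row if p > 0) / n
    return h_marg - h_cond

# With a narrow Gaussian, competition is nearly winner-take-all and the
# information approaches its maximum; with a wide Gaussian, all units fire
# similarly and the information stays low.
patterns = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]]
weights = [[0.0, 0.0], [1.0, 1.0]]
info_narrow = competitive_info(patterns, weights, sigma=0.1)
info_wide = competitive_info(patterns, weights, sigma=2.0)
```

Here the narrow-width information is close to log 2 (each pattern captured by one unit), while the wide-width information is near zero, which mirrors the abstract's claim that adjusting the Gaussian width drives the information-maximization process.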