Abstract
A learning algorithm based on primary school teaching and learning is presented. The methodology is to continuously evaluate the performance of the network and to train it on the examples for which it repeatedly fails, until all the examples are correctly classified. Empirical analysis on UCI datasets shows that the algorithm produces good training data and improves the generalization ability of the network on unseen data. The algorithm has interesting applications in data mining, model evaluation, and rare-object discovery.
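The evaluate-then-retrain loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the network is replaced by a 1-nearest-neighbour rule over a growing training subset (a hypothetical simplification), and for brevity an example is added to the subset after a single failure rather than after repeated failures.

```python
import math

def train_on_failures(examples, labels, max_rounds=50):
    """Grow a training subset from misclassified examples.

    Repeatedly evaluates every example against the current subset
    and adds the failures, stopping once all examples are classified
    correctly (or after max_rounds). Returns the selected indices.
    """
    selected = [0]  # seed the subset with the first example
    for _ in range(max_rounds):
        failed = []
        for i, x in enumerate(examples):
            # "evaluate": predict i's label with 1-NN over the subset
            nearest = min(selected, key=lambda j: math.dist(x, examples[j]))
            if labels[nearest] != labels[i]:
                failed.append(i)
        if not failed:
            break  # every example is now classified correctly
        selected.extend(failed)  # "retrain" on the failed examples
    return selected

def predict(examples, labels, selected, x):
    """Classify a point using the selected training subset."""
    nearest = min(selected, key=lambda j: math.dist(x, examples[j]))
    return labels[nearest]
```

Because a selected point is always its own nearest neighbour, the loop is guaranteed to terminate when the data contain no duplicate points with conflicting labels; the selected subset then plays the role of the "good training data" the abstract refers to.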
Cite this article
Philip, N.S. A learning algorithm based on primary school teaching wisdom. Paladyn 1, 160–168 (2010). https://doi.org/10.2478/s13230-011-0002-z