A learning algorithm based on primary school teaching wisdom

  • Research Article
  • Published in: Paladyn

Abstract

A learning algorithm based on primary school teaching and learning is presented. The methodology is to continuously evaluate the performance of the network and to train it on the examples on which it repeatedly fails, until all the examples are correctly classified. Empirical analysis on UCI data shows that the algorithm produces good training data and improves the generalization ability of the network on unseen data. The algorithm has interesting applications in data mining, model evaluation and rare-object discovery.
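
As a rough illustration of the loop described above (evaluate, retrain on the repeated failures, stop when everything is classified correctly), the sketch below wraps that schedule around a plain NumPy softmax classifier. The classifier, the name train_on_failures, and all parameters are illustrative assumptions for this page and are not the network or implementation used in the paper.

import numpy as np

def train_on_failures(X, y, n_classes, lr=0.1, max_rounds=100, inner_epochs=20, seed=0):
    """Fit a classifier, then repeatedly retrain it on the examples it still
    gets wrong until every training example is classified correctly (or a
    round limit is reached)."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    W = rng.normal(scale=0.01, size=(n_features, n_classes))
    b = np.zeros(n_classes)

    def predict(Xb):
        return np.argmax(Xb @ W + b, axis=1)

    def fit_batch(Xb, yb):
        # plain softmax regression trained by gradient descent
        nonlocal W, b
        for _ in range(inner_epochs):
            logits = Xb @ W + b
            logits -= logits.max(axis=1, keepdims=True)      # numerical stability
            p = np.exp(logits)
            p /= p.sum(axis=1, keepdims=True)
            p[np.arange(len(yb)), yb] -= 1.0                 # gradient of cross-entropy w.r.t. logits
            W -= lr * Xb.T @ p / len(yb)
            b -= lr * p.mean(axis=0)

    fit_batch(X, y)                                          # one ordinary pass over all examples
    for _ in range(max_rounds):
        failed = np.flatnonzero(predict(X) != y)             # examples the model fails on
        if failed.size == 0:                                 # all examples classified correctly
            break
        fit_batch(X[failed], y[failed])                      # remedial training on the failures
    return predict

Typical use, assuming X_train is a feature matrix and y_train an integer label array: predict = train_on_failures(X_train, y_train, n_classes=3), after which predict(X_test) returns class labels for unseen data.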



Author information

Corresponding author

Correspondence to Ninan Sajeeth Philip.

About this article

Cite this article

Philip, N.S. A learning algorithm based on primary school teaching wisdom. Paladyn 1, 160–168 (2010). https://doi.org/10.2478/s13230-011-0002-z
