
Accuracy of Neural Network Classifiers as a Property of the Size of the Data Set

  • Patricia S. Crowther
  • Robert J. Cox
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4253)

Abstract

It is well-known that the accuracy of a neural network classifier increases as the number of data points in the training set increases. A previous researcher has proposed a general mathematical model that explains the relationship between training sample size and predictive power. We examine this model using artificially generated data sets containing varying numbers of data points and some real world data sets. We find the model works well when large numbers of data points are available for training, but presents practical difficulties when the amount of available data is small and the data set is difficult to classify.
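
The model described above captures a common pattern: classifier accuracy rises quickly as training data is added and then levels off toward an asymptote. As a rough, hypothetical illustration of how such a learning curve can be fitted to accuracy-versus-sample-size measurements, the sketch below assumes an exponential-saturation form; the function, parameter names, and data values are illustrative assumptions, not the specific model or results of this paper.

```python
# Illustrative sketch only: fit a saturating learning curve of the assumed form
#   acc(n) = a_max - (a_max - a_0) * exp(-k * n)
# to hypothetical accuracy-versus-training-set-size data.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, a_max, a_0, k):
    """Accuracy as a function of training-set size n (saturating form)."""
    return a_max - (a_max - a_0) * np.exp(-k * n)

# Hypothetical observed accuracies for increasing training-set sizes.
sizes = np.array([50, 100, 200, 400, 800, 1600, 3200])
accuracy = np.array([0.61, 0.68, 0.74, 0.79, 0.83, 0.85, 0.86])

# Estimate the three parameters; p0 is a rough starting guess.
params, _ = curve_fit(learning_curve, sizes, accuracy, p0=[0.9, 0.5, 0.001])
a_max, a_0, k = params
print(f"Estimated asymptotic accuracy: {a_max:.3f}")
print(f"Predicted accuracy at 10000 samples: {learning_curve(10000, *params):.3f}")
```

With many data points such a curve is well constrained; with few, hard-to-classify samples the fitted asymptote and rate can vary widely, which mirrors the practical difficulty noted in the abstract.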

Keywords

Training Sample · Radial Basis Function Neural Network · Lichen Planus · Resultant Accuracy · Neural Network Classifier



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Patricia S. Crowther ¹
  • Robert J. Cox ¹

  1. School of Information Sciences and Engineering, University of Canberra, Australia
