Abstract
Symbolic classifiers from Artificial Intelligence compete with those from the established and emerging fields of statistics and neural networks. The traditional view is that symbolic classifiers are attractive because they are easier to use, are faster, and produce human-understandable rules. However, as this paper shows through a comparison of fourteen established state-of-the-art symbolic, statistical, and neural classifiers on eight large real-world problems, symbolic classifiers also achieve superior, or at least comparable, accuracy when the characteristics of the data suit them. These data characteristics are measured using a set of statistical and qualitative descriptors, first proposed in the present (and the related) work. This has implications for algorithm users and method designers: the strengths of various algorithms can be exploited in application, and superior features of other algorithms can be incorporated into existing ones.
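The idea of measuring statistical data characteristics to judge classifier suitability can be illustrated with a minimal sketch. This is not the paper's actual descriptor set; the function name and the particular descriptors (example/feature/class counts, mean per-feature spread) are illustrative assumptions only.

```python
# Illustrative sketch: computing a few simple statistical descriptors of a
# dataset, of the kind used to characterize which family of classifiers
# might suit the data. Descriptor choice here is hypothetical.
import statistics

def describe(rows, labels):
    """rows: list of numeric feature vectors; labels: class label per row."""
    n_examples = len(rows)
    n_features = len(rows[0])
    n_classes = len(set(labels))
    # Mean per-feature population standard deviation (a crude spread measure).
    per_feature = list(zip(*rows))
    mean_sd = statistics.mean(statistics.pstdev(f) for f in per_feature)
    return {
        "examples": n_examples,
        "features": n_features,
        "classes": n_classes,
        "mean_sd": round(mean_sd, 3),
    }

rows = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0], [4.0, 8.0]]
labels = ["a", "a", "b", "b"]
print(describe(rows, labels))
```

In a full study, such descriptors would be computed per dataset and related to the observed accuracy ranking of the candidate classifiers.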
© 1994 Springer-Verlag New York, Inc.
Cite this paper
Feng, C., King, R., Sutherland, A., Muggleton, S., Henery, R. (1994). Symbolic Classifiers: Conditions to Have Good Accuracy Performance. In: Cheeseman, P., Oldford, R.W. (eds) Selecting Models from Data. Lecture Notes in Statistics, vol 89. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-2660-4_38
DOI: https://doi.org/10.1007/978-1-4612-2660-4_38
Publisher Name: Springer, New York, NY
Print ISBN: 978-0-387-94281-0
Online ISBN: 978-1-4612-2660-4