Abstract
This paper presents results comparing three simple inductive learning systems that use different concept representations: CNF formulae, DNF formulae, and decision trees. The CNF learner performs surprisingly well: results on five natural data sets indicate that it frequently trains faster and produces more accurate and simpler concepts.
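As background for the comparison above (this sketch is not taken from the paper): a Boolean concept can be written in CNF as a conjunction of disjunctive clauses, or in DNF as a disjunction of conjunctive terms. A minimal illustration, using a hypothetical encoding of literals as (variable, polarity) pairs:

```python
# Evaluate a Boolean concept under CNF and DNF encodings.
# The literal encoding here is illustrative, not from the paper.

def eval_cnf(clauses, assignment):
    # CNF: every clause (a disjunction of literals) must be satisfied.
    return all(
        any(assignment[var] == pol for var, pol in clause)
        for clause in clauses
    )

def eval_dnf(terms, assignment):
    # DNF: at least one term (a conjunction of literals) must be satisfied.
    return any(
        all(assignment[var] == pol for var, pol in term)
        for term in terms
    )

# The same concept, a AND (b OR c), in both forms:
cnf = [[("a", True)], [("b", True), ("c", True)]]
dnf = [[("a", True), ("b", True)], [("a", True), ("c", True)]]

x = {"a": True, "b": False, "c": True}
print(eval_cnf(cnf, x), eval_dnf(dnf, x))  # prints: True True
```

The two forms are logically equivalent here, but a learner searching clause-by-clause (CNF) can prefer different hypotheses than one searching term-by-term (DNF), which is the representational difference the paper's experiments probe.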
Cite this article
Mooney, R.J. Encouraging experimental results on learning CNF. Mach Learn 19, 79–92 (1995). https://doi.org/10.1007/BF00994661