Abstract
Constructive induction divides the problem of learning an inductive hypothesis into two intertwined searches: one for the “best” representation space, and one for the “best” hypothesis in that space. In data-driven constructive induction (DCI), a learning system searches for a better representation space by analyzing the input examples (data). The presented data-driven constructive induction method combines an AQ-type learning algorithm with two classes of representation-space improvement operators: constructors and destructors. The implemented system, AQ17-DCI, has been experimentally applied to a GNP prediction problem using a World Bank database. The results show that the decision rules learned by AQ17-DCI outperformed the rules learned in the original representation space in both predictive accuracy and rule simplicity.
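The two operator classes named in the abstract can be illustrated with a minimal sketch: constructors build candidate attributes from existing ones, and destructors contract the space by discarding low-quality attributes before rule learning. Everything below (the threshold-based relevance score, the sum/product constructor set, and all names) is a hypothetical stand-in for illustration, not the actual AQ17-DCI operators or its AQ rule learner.

```python
from itertools import combinations

def relevance(values, labels):
    # Crude attribute-quality score: fraction of examples correctly
    # separated by the best single threshold on this attribute (a
    # stand-in for measures such as PROMISE used in the AQ family).
    classes = sorted(set(labels))
    best = 0.0
    for t in set(values):
        correct = sum((v <= t) == (y == classes[0])
                      for v, y in zip(values, labels))
        best = max(best, correct, len(labels) - correct)
    return best / len(labels)

def dci_step(data, labels, keep=3):
    # data: dict mapping attribute name -> list of numeric values.
    # Constructor operators: pairwise products and sums of originals.
    candidates = dict(data)
    for a, b in combinations(sorted(data), 2):
        candidates[f"{a}*{b}"] = [x * y for x, y in zip(data[a], data[b])]
        candidates[f"{a}+{b}"] = [x + y for x, y in zip(data[a], data[b])]
    # Destructor operator: rank all attributes by relevance and keep
    # only the best few, contracting the representation space.
    ranked = sorted(candidates,
                    key=lambda n: relevance(candidates[n], labels),
                    reverse=True)
    return {n: candidates[n] for n in ranked[:keep]}

# Toy data: the class depends on whether x1 and x2 agree in sign, so no
# original attribute separates the classes, but the product x1*x2 does.
data = {"x1": [1, -1, 1, -1], "x2": [1, -1, -1, 1], "x3": [0, 1, 0, 1]}
labels = ["pos", "pos", "neg", "neg"]
new_space = dci_step(data, labels)
print(sorted(new_space))  # the constructed attribute x1*x2 is retained
```

In this toy run each original attribute scores only 0.5 (chance level), while the constructed attribute x1*x2 scores 1.0 and survives the contraction, which is the effect the DCI step is meant to achieve before the rules are learned.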
Keywords
- Representation Space
- Compare International Development
- Target Concept
- Space Contraction
- Attribute Construction
References
Baim, P.W., “The PROMISE Method for Selecting the Most Relevant Attributes for Inductive Learning Systems”, Rep. No. UIUCDCS-F-82-898, Dept. of Computer Science, University of Illinois-Urbana Champaign, IL, 1982.
Bloedorn, E. and Michalski, R.S. “Data-Driven Constructive Induction in AQ17-PRE: A Method and Experiments”, Proceedings of the Third International Conference on Tools for AI, November 1991a.
Bloedorn, E. and Michalski, R.S., “Constructive Induction from Data in AQ17-DCI: Further Experiments,” Reports of the Machine Learning and Inference Laboratory, MLI 91-12, School of Information Technology and Engineering, George Mason University, Fairfax, VA, December 1991b.
Bloedorn, E., Michalski, R., and Wnek, J., “Multistrategy Constructive Induction,” Proceedings of the Second International Workshop on Multistrategy Learning, Harpers Ferry, WV, May 26–29, 1993.
Bloedorn, E., Michalski, R.S., and Wnek, J., “Matching Methods with Problems: A Comparative Analysis of Constructive Induction Approaches”, Reports of the Machine Learning and Inference Laboratory, MLI 94-2, George Mason University, Fairfax, VA, 1994.
Bloedorn, E. and Kaufman, K., “Data-Driven Constructive Induction in INLEN”, Reports of the Machine Learning and Inference Laboratory, George Mason University, Fairfax, VA, 1996 (to appear).
Dougherty, J., Kohavi, R., and Sahami, M., “Supervised and Unsupervised Discretization of Continuous Features”, Proceedings of the Twelfth International Conference on Machine Learning, pp. 194–201, June 1995.
Fulton, T., Kasif, S., and Salzberg S., “Efficient Algorithms for Finding Multi-way Splits for Decision Trees”, Proceedings of the Twelfth International Conference on Machine Learning, pp. 244–251, June 1995.
Greene, G.H., “Quantitative Discovery: Using Dependencies to Discover Non-Linear Terms”, M.S. Thesis, University of Illinois at Urbana-Champaign, 1988.
Grzymala-Busse, J.W., “LERS — A System for Learning from Examples based on Rough Sets”, in Slowinski, R. (Ed.), Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory, Kluwer Academic Publishers, pp. 3–18, 1992.
Jensen, G., “SYM-1: A Program that Detects Symmetry of Variable-Valued Logic Functions”, Report UIUCDCS-R-75-729, Department of Computer Science, University of Illinois at Urbana-Champaign, 1975.
Kaufman, K., “Comparing International Development Patterns Using Multi-Operator Learning and Discovery Tools”, Proceedings of the AAAI-94 Workshop on Knowledge Discovery in Databases, Seattle, WA, pp. 431–440. 1994.
Kerber, R., “ChiMerge: Discretization of Numeric Attributes”, Proceedings of the Tenth National Conference on Artificial Intelligence, pp. 123–128, San Jose, CA, 1992.
Langley, P., “Rediscovering Physics with Bacon 3,” Fifth International Joint Conference on Artificial Intelligence, pp. 505–507, Cambridge, MA, 1977.
Lenat, D., “Learning from Observation and Discovery”, in Machine Learning: An Artificial Intelligence Approach, Vol. I, R.S. Michalski, J.G. Carbonell and T.M. Mitchell (Eds.), Palo Alto, CA: Morgan Kaufmann (reprint), 1983.
Matheus, C. J. and Rendell, L.A., “Constructive Induction on Decision Trees”, In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, pp. 645–650, 1989.
Michalski, R.S., “Recognition of Total or Partial Symmetry in a Completely or Incompletely Specified Switching Function,” Proceedings of the IV Congress of the International Federation on Automatic Control (IFAC), Vol. 27 (Finite Automata and Switching Systems), pp. 109–129, Warsaw, June 16–21, 1969.
Michalski, R.S., “On the Quasi-Minimal Solution of the Covering Problem” Proceedings of the V International Symposium on Information Processing (FCIP 69), Vol. A3 (Switching Circuits), Bled, Yugoslavia, pp. 125–128, 1969.
Michalski, R.S. and McCormick, B.H., “Interval Generalization of Switching Theory.” Report No. 442, Dept. of Computer Science, University of Illinois, Urbana. 1971.
Michalski, R.S., “Variable-Valued Logic: System VL1,” Proceedings of the 1974 International Symposium on Multiple-Valued Logic, pp. 323–346, West Virginia University, Morgantown, 1974.
Michalski, R.S. and Larson, J.B., “Selection of Most Representative Training Examples and Incremental Generation of VL1 Hypotheses: the underlying methodology and the description of programs ESEL and AQ11,” Report No. 867, Dept. of Computer Science, University of Illinois, Urbana, 1978.
Michalski, R.S., “Pattern Recognition as Rule-Guided Inductive Inference,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 2, No. 4, pp. 349–361, 1980.
Michalski, R.S., “A Theory and Methodology of Inductive Learning: Developing Foundations for Multistrategy Learning,” in Machine Learning: An Artificial Intelligence Approach, Vol. I, R.S. Michalski, J.G. Carbonell and T.M. Mitchell (Eds.), Palo Alto, CA: Morgan Kaufmann (reprint), 1983.
Michalski, R.S., “Inferential Theory of Learning,” in Machine Learning: A Multistrategy Approach, Vol. IV, R.S. Michalski and G. Tecuci (Eds.), Palo Alto, CA: Morgan Kaufmann, 1994.
Muggleton, S., “Duce, an Oracle-Based Approach to Constructive Induction”, Proceedings of IJCAI-87, pp. 287–292, Morgan Kaufmann, Milan, Italy, 1987.
Pagallo, G., and Haussler, D., “Boolean Feature Discovery in Empirical Learning”, Machine Learning, vol. 5, pp. 71–99, 1990.
Pawlak, Z. “Rough Sets and their Applications”, Workshop on Mathematics and AI, Schloss Reisburg, W. Germany. Vol II. pp. 543–572. 1988.
Pawlak, Z. “Rough Sets: Theoretical Aspects of Reasoning about Data”, Kluwer Academic Publishers, AA Dordrecht, The Netherlands, 1991.
Quinlan, J. R., “Learning Efficient Classification Procedures,” Machine Learning: An Artificial Intelligence Approach, Michalski, R.S., Carbonell, J.G, and Mitchell, T.M. (Eds.), Morgan Kaufmann 1983, pp. 463–482.
Quinlan, J.R., “C4.5: Programs for Machine Learning”, Morgan Kaufmann, San Mateo, CA, 1993.
Reinke, R.E., “Knowledge Acquisition and Refinement Tools for the ADVISE Meta-expert System,” Master's Thesis, University of Illinois, 1984.
Rendell, L., and Seshu, R., “Learning Hard Concepts Through Constructive Induction: Framework and Rationale,” Computer Intelligence, Vol. 6, pp. 247–270, 1990.
Schlimmer, J., “Concept Acquisition Through Representational Adjustment,” Machine Learning, Vol. 1, pp. 81–106, 1986.
Thrun, S.B., Bala, J., Bloedorn, E., Bratko, I., Cestnik, B., Cheng, J., De Jong, K., Dzerowski, S., Fahlman, S.E., Hamann, R., Kaufman, K., Keller, S., Kononenko, I., Kreuziger, J., Michalski, R.S., Mitchell, T., Pachowicz, P., Vafaie, H., Van de Velde, W., Wenzel, W., Wnek, J., and Zhang, J., “The MONK's Problems: A Performance Comparison of Different Learning Algorithms,” (revised version), Carnegie Mellon University, Pittsburgh, PA, CMU-CS-91-197, 1991.
Utgoff, P., “Shift of Bias for Inductive Learning,” in Machine Learning: An Artificial Intelligence Approach, Vol. II, R. Michalski, J. Carbonell, and T. Mitchell (Eds.), Morgan Kaufmann, Los Altos, CA, pp. 107–148, 1986.
Watanabe, L., and Elio, R., “Guiding Constructive Induction for Incremental Learning from Examples,” Proceedings of IJCAI-87, pp. 293–296, Milan, Italy, 1987.
Weiss, S. M., and Kulikowski, C. A., Computer Systems that Learn, Morgan Kaufmann, San Mateo, CA. 1991.
Wnek, J. and Michalski, R., “Hypothesis-driven Constructive Induction in AQ17-HCI: A Method and Experiments,” Machine Learning, Vol. 14, No. 2, pp. 139–168. 1993.
Wnek, J. “DIAV 2.0 User Manual: Specification and Guide through the Diagrammatic Visualization System,” Reports of the Machine Learning and Inference Laboratory, MLI95-5, George Mason University, Fairfax, VA 1995.
Wnek, J., Kaufman, K., Bloedorn, E., and Michalski, R.S., “Selective Induction Learning System AQ15c: The Method and User's Guide”, Reports of the Machine Learning and Inference Laboratory, MLI 95-4.
Ziarko, W. “On Reduction of Knowledge Representation”, Proceedings of the 2nd International Symposium on Methodologies for Intelligent Systems, Charlotte, NC. North Holland, pp. 99–113.
Copyright information
© 1996 Springer-Verlag Berlin Heidelberg
Cite this paper
Bloedorn, E., Michalski, R.S. (1996). The AQ17-DCI system for data-driven constructive induction and its application to the analysis of world economics. In: Raś, Z.W., Michalewicz, M. (eds) Foundations of Intelligent Systems. ISMIS 1996. Lecture Notes in Computer Science, vol 1079. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61286-6_136
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-61286-5
Online ISBN: 978-3-540-68440-4