Abstract
In concept learning and data mining, a typical objective is to determine concept descriptions or patterns that will classify future data points as correctly as possible. If one can assume that the data contain no noise, then it is desirable that descriptions are complete and consistent with regard to all the data, i.e., they characterize all data points in a given class (positive examples) and no data points outside the class (negative examples).
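The completeness and consistency conditions described above can be sketched in a few lines. The rule representation below (a conjunction of attribute-value tests over dictionary-encoded examples) is a hypothetical simplification for illustration, not the AQ implementation.

```python
# Minimal sketch: checking completeness and consistency of a rule,
# where a rule is a hypothetical conjunction of attribute = value tests
# and examples are attribute -> value dictionaries.

def covers(rule, example):
    """A rule covers an example if every attribute test in the rule holds."""
    return all(example.get(attr) == value for attr, value in rule.items())

def is_complete(rule, positives):
    """Complete: the rule covers every positive example."""
    return all(covers(rule, e) for e in positives)

def is_consistent(rule, negatives):
    """Consistent: the rule covers no negative example."""
    return not any(covers(rule, e) for e in negatives)

# Toy data (illustrative only).
positives = [{"shape": "round", "color": "red"},
             {"shape": "round", "color": "green"}]
negatives = [{"shape": "square", "color": "red"}]

rule = {"shape": "round"}
print(is_complete(rule, positives))    # True: covers both positives
print(is_consistent(rule, negatives))  # True: covers no negative
```

With noisy data, insisting on both conditions simultaneously may be impossible or undesirable, which is the situation the chapter addresses.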
© 2001 Springer-Verlag Berlin Heidelberg
Cite this chapter
Michalski, R.S., Kaufman, K.A. (2001). Learning Patterns in Noisy Data: The AQ Approach. In: Paliouras, G., Karkaletsis, V., Spyropoulos, C.D. (eds) Machine Learning and Its Applications. ACAI 1999. Lecture Notes in Computer Science(), vol 2049. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44673-7_2
DOI: https://doi.org/10.1007/3-540-44673-7_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-42490-1
Online ISBN: 978-3-540-44673-6