Abstract
Learning is a fundamental ability of many living organisms. It leads to the development of new skills, values, understanding, and preferences. Improved learning capabilities catalyze evolution and may distinguish entire species with respect to the activities they are able to perform. The importance of learning is, thus, beyond question. Learning covers a broad range of tasks. Some tasks are particularly interesting because they can be mathematically modeled. This makes it natural to wonder whether computers might be made, or programmed, to learn (Turing, Can digital computers think? A talk on BBC Third Programme, 15 May 1951). A deep understanding of how to program computers to learn is still far off, but it would have great impact because it would broaden the spectrum of problems that computers can solve. Candidate problems range between two extremes: structured problems, whose solutions are totally defined [and which are thus easily programmed by humans (Hutter, The fastest and shortest algorithm for all well-defined problems. Int J Found Comp Sci 13(3), 431–443, 2002)], and random problems, whose solutions are completely undefined and thus cannot be programmed. Problems in the vast middle ground have solutions that cannot be well defined and are, thus, inherently hard to program: no one could foresee and provide solutions for all possible situations. Machine learning is one possible strategy for handling this vast middle ground, replacing many tedious and difficult hand-coding tasks with automatic learning methods. In this book we investigate an important learning method known as classification.
Copyright information
© 2011 Adriano Veloso
About this chapter
Cite this chapter
Veloso, A., Meira, W. (2011). Introduction. In: Demand-Driven Associative Classification. SpringerBriefs in Computer Science. Springer, London. https://doi.org/10.1007/978-0-85729-525-5_1
Publisher Name: Springer, London
Print ISBN: 978-0-85729-524-8
Online ISBN: 978-0-85729-525-5