Machine Learning

Chapter in Search Methodologies

Abstract

Machine learning is a very active sub-field of artificial intelligence concerned with the development of computational models of learning. It draws on work in several disciplines: cognitive science, computer science, statistics, computational complexity, information theory, control theory, philosophy and biology. Put simply, machine learning is learning by machine. From a computational point of view, machine learning refers to the ability of a machine to improve its performance on the basis of previous results. From a biological point of view, it is the study of how to build computers that learn from experience and modify their behaviour in light of that learning, in contrast to traditional computers, whose behaviour does not change unless the programmer explicitly changes it.
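The computational view above — a machine improving its performance as it accumulates previous results — can be illustrated with a minimal sketch. The example below is not from the chapter; it is a hypothetical one-dimensional classifier (the names `fit_threshold` and `TRUE_THRESHOLD` are illustrative assumptions) whose held-out accuracy tends to improve as it sees more labelled examples.

```python
import random

random.seed(0)

TRUE_THRESHOLD = 5.0  # the unknown concept the learner tries to recover


def label(x):
    """Ground-truth labelling: positive iff x exceeds the true threshold."""
    return 1 if x > TRUE_THRESHOLD else 0


def fit_threshold(examples):
    """Estimate the decision threshold as the midpoint between the
    largest negative example and the smallest positive example seen."""
    neg = [x for x, y in examples if y == 0]
    pos = [x for x, y in examples if y == 1]
    if not neg or not pos:
        return 0.0  # no information yet: fall back to a fixed default guess
    return (max(neg) + min(pos)) / 2


def accuracy(threshold, test_set):
    """Fraction of held-out points classified correctly by the threshold."""
    correct = sum((x > threshold) == (y == 1) for x, y in test_set)
    return correct / len(test_set)


# Fixed held-out test set, never shown to the learner.
test_set = [(x, label(x)) for x in [random.uniform(0, 10) for _ in range(200)]]

# Training stream: as the learner sees more data, its estimate sharpens.
stream = [(x, label(x)) for x in [random.uniform(0, 10) for _ in range(100)]]
for n in (2, 10, 100):
    theta = fit_threshold(stream[:n])
    print(f"after {n:3d} examples: threshold={theta:.3f}, "
          f"test accuracy={accuracy(theta, test_set):.3f}")
```

The point of the sketch is only the behaviour of the loop: the program's performance on unseen data improves because of the data it has seen, not because anyone edited the program.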



Correspondence to Xin Yao.

Copyright information

© 2014 Springer Science+Business Media New York

Cite this chapter

Yao, X., Liu, Y. (2014). Machine Learning. In: Burke, E., Kendall, G. (eds) Search Methodologies. Springer, Boston, MA. https://doi.org/10.1007/978-1-4614-6940-7_17
