Supervised Learning


Part of the book series: Springer Tracts in Advanced Robotics ((STAR,volume 61))

Introduction

In a supervised learning task we are interested in finding a function that maps a set of given examples into a set of classes or categories. This function, called a classifier, is later used to classify new examples, which in general differ from the given ones. The process of modeling the classifier is called training, and the algorithm used during training is called the learning algorithm. The initial examples used by the learning algorithm are called training examples; they are usually provided to the robot by an external agent, also called a tutor. Once the classifier has been learned, it should be tested on new, unseen examples to evaluate its performance. These new examples are called test examples.
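The workflow above can be sketched in a few lines of Python. The example below is a minimal illustration, not the method used in this book: the learning algorithm is a simple 1-nearest-neighbor rule, and the feature vectors and place labels ("corridor", "room") are hypothetical data invented for the sketch.

```python
# Sketch of the supervised learning workflow: a tutor provides labeled
# training examples, a learning algorithm produces a classifier, and the
# classifier is evaluated on unseen test examples.

def train(examples):
    """Learning algorithm: here, 1-nearest-neighbor simply memorizes
    the training examples."""
    return list(examples)

def classify(model, x):
    """Classifier: assign x the label of its nearest training example
    (squared Euclidean distance)."""
    nearest = min(model,
                  key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], x)))
    return nearest[1]

def accuracy(model, test_examples):
    """Evaluate the classifier on test examples not seen during training."""
    correct = sum(classify(model, x) == y for x, y in test_examples)
    return correct / len(test_examples)

# Tutor-provided training examples: (feature vector, class label).
training = [((0.0, 0.0), "corridor"), ((0.2, 0.1), "corridor"),
            ((1.0, 1.0), "room"),     ((0.9, 1.1), "room")]

# New, unseen test examples for evaluating the learned classifier.
test = [((0.1, 0.0), "corridor"), ((1.1, 0.9), "room")]

model = train(training)
print(accuracy(model, test))  # -> 1.0
```

Any learning algorithm (decision trees, boosting, and so on) fits the same train/classify/evaluate pattern; only `train` and `classify` change.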




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Mozos, Ó.M. (2010). Supervised Learning. In: Semantic Labeling of Places with Mobile Robots. Springer Tracts in Advanced Robotics, vol 61. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-11210-2_2


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-11209-6

  • Online ISBN: 978-3-642-11210-2

